Test Report: Docker_Linux_docker_arm64 22402

783b0304fb34eb1d9554b20c324bb66df0781ba8:2026-01-11:43196

Failed tests (2/352)

Order  Failed test            Duration (s)
52     TestForceSystemdFlag   507.47
53     TestForceSystemdEnv    506.90
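
Both failures occur in the initial cluster start. For local triage, the invocation captured below can be replayed as-is against a scratch profile (binary path and flags taken verbatim from the log):

    out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 \
      --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=docker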
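Once a start succeeds, the force-systemd tests go on to assert the cgroup driver in use inside the node. The first command below is an assumption about what docker_test.go verifies (that check never appears in this log because start itself failed); the rest collect logs and tear the profile down:

    # assumed assertion: dockerd inside the node should report the systemd cgroup driver
    out/minikube-linux-arm64 -p force-systemd-flag-176470 ssh "docker info --format {{.CgroupDriver}}"
    # capture full logs before deleting the profile
    out/minikube-linux-arm64 -p force-systemd-flag-176470 logs --file=force-systemd-flag.log
    out/minikube-linux-arm64 delete -p force-systemd-flag-176470
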
TestForceSystemdFlag (507.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0111 08:03:14.555279  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:04:20.375226  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.800479  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.805934  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.816268  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.836568  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.877111  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:02.957460  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:03.117900  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:03.438632  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:04.079653  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:05.360123  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:07.920343  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:13.040880  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:23.282033  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:05:43.762995  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:06:17.323020  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:06:24.723881  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:07:46.644142  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:08:14.554946  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:10:02.801703  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:10:30.486653  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m22.906915361s)

-- stdout --
	* [force-systemd-flag-176470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-176470" primary control-plane node in "force-systemd-flag-176470" cluster
	* Pulling base image v0.0.48-1768032998-22402 ...
	
	

-- /stdout --
** stderr ** 
	I0111 08:02:55.219760  510536 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:02:55.219965  510536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:02:55.219993  510536 out.go:374] Setting ErrFile to fd 2...
	I0111 08:02:55.220012  510536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:02:55.220685  510536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 08:02:55.221284  510536 out.go:368] Setting JSON to false
	I0111 08:02:55.222163  510536 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9925,"bootTime":1768108650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 08:02:55.222344  510536 start.go:143] virtualization:  
	I0111 08:02:55.225197  510536 out.go:179] * [force-systemd-flag-176470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:02:55.227752  510536 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:02:55.227908  510536 notify.go:221] Checking for updates...
	I0111 08:02:55.233637  510536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:02:55.236621  510536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 08:02:55.239599  510536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 08:02:55.242477  510536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:02:55.245433  510536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:02:55.248887  510536 config.go:182] Loaded profile config "force-systemd-env-081796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:02:55.249012  510536 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:02:55.278955  510536 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:02:55.279151  510536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:02:55.340253  510536 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:02:55.330464883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:02:55.340364  510536 docker.go:319] overlay module found
	I0111 08:02:55.343549  510536 out.go:179] * Using the docker driver based on user configuration
	I0111 08:02:55.346477  510536 start.go:309] selected driver: docker
	I0111 08:02:55.346500  510536 start.go:928] validating driver "docker" against <nil>
	I0111 08:02:55.346516  510536 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:02:55.347367  510536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:02:55.398049  510536 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:02:55.38897404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:02:55.398208  510536 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:02:55.398435  510536 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:02:55.401352  510536 out.go:179] * Using Docker driver with root privileges
	I0111 08:02:55.404169  510536 cni.go:84] Creating CNI manager for ""
	I0111 08:02:55.404240  510536 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:02:55.404253  510536 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0111 08:02:55.404339  510536 start.go:353] cluster config:
	{Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:02:55.407425  510536 out.go:179] * Starting "force-systemd-flag-176470" primary control-plane node in "force-systemd-flag-176470" cluster
	I0111 08:02:55.410185  510536 cache.go:134] Beginning downloading kic base image for docker with docker
	I0111 08:02:55.413170  510536 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:02:55.415999  510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:55.416053  510536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0111 08:02:55.416067  510536 cache.go:65] Caching tarball of preloaded images
	I0111 08:02:55.416071  510536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:02:55.416162  510536 preload.go:251] Found /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:02:55.416173  510536 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0111 08:02:55.416278  510536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json ...
	I0111 08:02:55.416296  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json: {Name:mkca1c7e6f1f75138479137408eba180dfbb6698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:55.436232  510536 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:02:55.436255  510536 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:02:55.436276  510536 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:02:55.436313  510536 start.go:360] acquireMachinesLock for force-systemd-flag-176470: {Name:mk069654716209309832bc30167c071b9142dd8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:02:55.436420  510536 start.go:364] duration metric: took 86.972µs to acquireMachinesLock for "force-systemd-flag-176470"
	I0111 08:02:55.436450  510536 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0111 08:02:55.436517  510536 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:02:55.440079  510536 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:02:55.440330  510536 start.go:159] libmachine.API.Create for "force-systemd-flag-176470" (driver="docker")
	I0111 08:02:55.440371  510536 client.go:173] LocalClient.Create starting
	I0111 08:02:55.440473  510536 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem
	I0111 08:02:55.440510  510536 main.go:144] libmachine: Decoding PEM data...
	I0111 08:02:55.440529  510536 main.go:144] libmachine: Parsing certificate...
	I0111 08:02:55.440585  510536 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem
	I0111 08:02:55.440606  510536 main.go:144] libmachine: Decoding PEM data...
	I0111 08:02:55.440635  510536 main.go:144] libmachine: Parsing certificate...
	I0111 08:02:55.441019  510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:02:55.456590  510536 cli_runner.go:211] docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:02:55.456686  510536 network_create.go:284] running [docker network inspect force-systemd-flag-176470] to gather additional debugging logs...
	I0111 08:02:55.456707  510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470
	W0111 08:02:55.472891  510536 cli_runner.go:211] docker network inspect force-systemd-flag-176470 returned with exit code 1
	I0111 08:02:55.472925  510536 network_create.go:287] error running [docker network inspect force-systemd-flag-176470]: docker network inspect force-systemd-flag-176470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-176470 not found
	I0111 08:02:55.472944  510536 network_create.go:289] output of [docker network inspect force-systemd-flag-176470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-176470 not found
	
	** /stderr **
	I0111 08:02:55.473054  510536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:02:55.489682  510536 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
	I0111 08:02:55.490078  510536 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40d7f82078db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:4c:a4:8c:ba:d2} reservation:<nil>}
	I0111 08:02:55.490313  510536 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-462883b60cc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:e8:a2:f7:f9:41} reservation:<nil>}
	I0111 08:02:55.490763  510536 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16310}
	I0111 08:02:55.490793  510536 network_create.go:124] attempt to create docker network force-systemd-flag-176470 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 08:02:55.490879  510536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-176470 force-systemd-flag-176470
	I0111 08:02:55.555925  510536 network_create.go:108] docker network force-systemd-flag-176470 192.168.76.0/24 created
	I0111 08:02:55.555959  510536 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-176470" container
	I0111 08:02:55.556048  510536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:02:55.573066  510536 cli_runner.go:164] Run: docker volume create force-systemd-flag-176470 --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:02:55.592089  510536 oci.go:103] Successfully created a docker volume force-systemd-flag-176470
	I0111 08:02:55.592203  510536 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-176470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --entrypoint /usr/bin/test -v force-systemd-flag-176470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:02:56.131269  510536 oci.go:107] Successfully prepared a docker volume force-systemd-flag-176470
	I0111 08:02:56.131324  510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:56.131342  510536 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:02:56.131410  510536 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-176470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:02:59.422056  510536 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-176470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.290602576s)
	I0111 08:02:59.422088  510536 kic.go:203] duration metric: took 3.290742215s to extract preloaded images to volume ...
	W0111 08:02:59.422241  510536 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:02:59.422362  510536 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:02:59.471640  510536 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-176470 --name force-systemd-flag-176470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-176470 --network force-systemd-flag-176470 --ip 192.168.76.2 --volume force-systemd-flag-176470:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:02:59.799854  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Running}}
	I0111 08:02:59.823185  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
	I0111 08:02:59.842792  510536 cli_runner.go:164] Run: docker exec force-systemd-flag-176470 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:02:59.903035  510536 oci.go:144] the created container "force-systemd-flag-176470" has a running status.
	I0111 08:02:59.903064  510536 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa...
	I0111 08:03:00.642486  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:03:00.642605  510536 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:03:00.666293  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
	I0111 08:03:00.685140  510536 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:03:00.685163  510536 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-176470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:03:00.728771  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
	I0111 08:03:00.747446  510536 machine.go:94] provisionDockerMachine start ...
	I0111 08:03:00.747552  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:00.765376  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:00.765734  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:00.765754  510536 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:03:00.766557  510536 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 08:03:03.914487  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-176470
	
	I0111 08:03:03.914512  510536 ubuntu.go:182] provisioning hostname "force-systemd-flag-176470"
	I0111 08:03:03.914586  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:03.932237  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:03.932556  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:03.932573  510536 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-176470 && echo "force-systemd-flag-176470" | sudo tee /etc/hostname
	I0111 08:03:04.105837  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-176470
	
	I0111 08:03:04.105961  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:04.127215  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:04.127623  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:04.127644  510536 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-176470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-176470/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-176470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:03:04.279132  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:03:04.279202  510536 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-276769/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-276769/.minikube}
	I0111 08:03:04.279237  510536 ubuntu.go:190] setting up certificates
	I0111 08:03:04.279260  510536 provision.go:84] configureAuth start
	I0111 08:03:04.279342  510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
	I0111 08:03:04.297243  510536 provision.go:143] copyHostCerts
	I0111 08:03:04.297285  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:03:04.297322  510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem, removing ...
	I0111 08:03:04.297328  510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:03:04.297407  510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem (1082 bytes)
	I0111 08:03:04.297482  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:03:04.297498  510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem, removing ...
	I0111 08:03:04.297502  510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:03:04.297526  510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem (1123 bytes)
	I0111 08:03:04.297563  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:03:04.297578  510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem, removing ...
	I0111 08:03:04.297583  510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:03:04.297605  510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem (1675 bytes)
	I0111 08:03:04.297646  510536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-176470 san=[127.0.0.1 192.168.76.2 force-systemd-flag-176470 localhost minikube]
	I0111 08:03:04.676341  510536 provision.go:177] copyRemoteCerts
	I0111 08:03:04.676407  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:03:04.676452  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:04.695533  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:04.802703  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:03:04.802763  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 08:03:04.821902  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:03:04.821976  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:03:04.840427  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:03:04.840528  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:03:04.858272  510536 provision.go:87] duration metric: took 578.972579ms to configureAuth
	I0111 08:03:04.858355  510536 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:03:04.858554  510536 config.go:182] Loaded profile config "force-systemd-flag-176470": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:03:04.858617  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:04.880754  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:04.881061  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:04.881071  510536 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0111 08:03:05.036241  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0111 08:03:05.036263  510536 ubuntu.go:71] root file system type: overlay
	I0111 08:03:05.036379  510536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0111 08:03:05.036456  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:05.055990  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:05.056308  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:05.056396  510536 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0111 08:03:05.217159  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0111 08:03:05.217244  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:05.236377  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:05.236706  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:05.236730  510536 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0111 08:03:06.213777  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2026-01-08 19:56:21.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-11 08:03:05.213214607 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0111 08:03:06.213812  510536 machine.go:97] duration metric: took 5.46634075s to provisionDockerMachine
	I0111 08:03:06.213825  510536 client.go:176] duration metric: took 10.773442328s to LocalClient.Create
	I0111 08:03:06.213873  510536 start.go:167] duration metric: took 10.773542862s to libmachine.API.Create "force-systemd-flag-176470"
	I0111 08:03:06.213889  510536 start.go:293] postStartSetup for "force-systemd-flag-176470" (driver="docker")
	I0111 08:03:06.213900  510536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:03:06.213976  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:03:06.214038  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.233489  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.338958  510536 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:03:06.342424  510536 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:03:06.342452  510536 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:03:06.342463  510536 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/addons for local assets ...
	I0111 08:03:06.342538  510536 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/files for local assets ...
	I0111 08:03:06.342671  510536 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> 2786382.pem in /etc/ssl/certs
	I0111 08:03:06.342685  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /etc/ssl/certs/2786382.pem
	I0111 08:03:06.342793  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:03:06.351211  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:03:06.369285  510536 start.go:296] duration metric: took 155.381043ms for postStartSetup
	I0111 08:03:06.369638  510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
	I0111 08:03:06.399155  510536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json ...
	I0111 08:03:06.399451  510536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:03:06.399491  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.417476  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.520083  510536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:03:06.524815  510536 start.go:128] duration metric: took 11.088280156s to createHost
	I0111 08:03:06.524841  510536 start.go:83] releasing machines lock for "force-systemd-flag-176470", held for 11.088407356s
	I0111 08:03:06.524937  510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
	I0111 08:03:06.541461  510536 ssh_runner.go:195] Run: cat /version.json
	I0111 08:03:06.541495  510536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:03:06.541521  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.541568  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.561814  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.578227  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.765197  510536 ssh_runner.go:195] Run: systemctl --version
	I0111 08:03:06.771777  510536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:03:06.776029  510536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:03:06.776122  510536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:03:06.804486  510536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:03:06.804565  510536 start.go:496] detecting cgroup driver to use...
	I0111 08:03:06.804592  510536 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:03:06.804767  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:03:06.818674  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:03:06.828002  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:03:06.837067  510536 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0111 08:03:06.837138  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0111 08:03:06.845964  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:03:06.855049  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:03:06.863676  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:03:06.872497  510536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:03:06.880973  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:03:06.890121  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:03:06.899090  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:03:06.908147  510536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:03:06.915960  510536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:03:06.923607  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:07.033909  510536 ssh_runner.go:195] Run: sudo systemctl restart containerd
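For context, the sed edits above flip the runc cgroup manager inside /etc/containerd/config.toml. A minimal manual check, assuming the stock kic-base config layout (this invocation is not part of the logged run):

    # Confirm the runc runtime now uses the systemd cgroup manager (layout assumed):
    grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # expected after the sed above: SystemdCgroup = true
    sudo systemctl restart containerd && systemctl is-active containerd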
	I0111 08:03:07.138594  510536 start.go:496] detecting cgroup driver to use...
	I0111 08:03:07.138622  510536 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:03:07.138676  510536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0111 08:03:07.154245  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:03:07.172345  510536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:03:07.221655  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:03:07.234818  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:03:07.247793  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:03:07.261501  510536 ssh_runner.go:195] Run: which cri-dockerd
	I0111 08:03:07.264985  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0111 08:03:07.272438  510536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0111 08:03:07.284695  510536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0111 08:03:07.404970  510536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0111 08:03:07.524732  510536 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0111 08:03:07.524836  510536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0111 08:03:07.537550  510536 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0111 08:03:07.550391  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:07.666047  510536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0111 08:03:08.113136  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
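The 129-byte /etc/docker/daemon.json written above is not printed in the log; a plausible shape for a config that enforces the systemd cgroup driver (contents assumed, not taken from this run):

    # Assumed daemon.json contents; verify with docker info afterwards.
    printf '{\n  "exec-opts": ["native.cgroupdriver=systemd"]\n}\n' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # should print: systemd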
	I0111 08:03:08.126395  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0111 08:03:08.140492  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:03:08.154283  510536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0111 08:03:08.276152  510536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0111 08:03:08.399843  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:08.519920  510536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0111 08:03:08.535880  510536 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0111 08:03:08.548954  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:08.674253  510536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0111 08:03:08.750422  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:03:08.764674  510536 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0111 08:03:08.764745  510536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0111 08:03:08.770190  510536 start.go:574] Will wait 60s for crictl version
	I0111 08:03:08.770257  510536 ssh_runner.go:195] Run: which crictl
	I0111 08:03:08.773920  510536 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:03:08.803610  510536 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.4
	RuntimeApiVersion:  v1
	I0111 08:03:08.803693  510536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:03:08.828423  510536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:03:08.856514  510536 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.4 ...
	I0111 08:03:08.856630  510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:03:08.871466  510536 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:03:08.876325  510536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:03:08.886543  510536 kubeadm.go:884] updating cluster {Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:03:08.886659  510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:03:08.886724  510536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:03:08.904762  510536 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:03:08.904783  510536 docker.go:624] Images already preloaded, skipping extraction
	I0111 08:03:08.904854  510536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:03:08.922241  510536 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:03:08.922264  510536 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:03:08.922278  510536 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0111 08:03:08.922378  510536 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-176470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:03:08.922440  510536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0111 08:03:08.974284  510536 cni.go:84] Creating CNI manager for ""
	I0111 08:03:08.974315  510536 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:03:08.974351  510536 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:03:08.974374  510536 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-176470 NodeName:force-systemd-flag-176470 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:03:08.974535  510536 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-176470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:03:08.974611  510536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:03:08.983370  510536 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:03:08.983451  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:03:08.991625  510536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0111 08:03:09.006053  510536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:03:09.021805  510536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
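The rendered config now sits at /var/tmp/minikube/kubeadm.yaml.new and can be sanity-checked offline before init. A hedged sketch, assuming kubeadm v1.35 ships the config validate subcommand (this step is not part of the logged run):

    # Offline validation of the rendered kubeadm config:
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new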
	I0111 08:03:09.035826  510536 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:03:09.039822  510536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
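The hosts update above removes any stale entry before appending the new one, so repeated runs stay idempotent. The same pattern, generalized (sketch; NAME and IP are placeholders):

    # Idempotent /etc/hosts entry update, mirroring the command above:
    NAME=control-plane.minikube.internal
    IP=192.168.76.2
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"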
	I0111 08:03:09.049986  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:09.169621  510536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:03:09.185723  510536 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470 for IP: 192.168.76.2
	I0111 08:03:09.185747  510536 certs.go:195] generating shared ca certs ...
	I0111 08:03:09.185764  510536 certs.go:227] acquiring lock for ca certs: {Name:mk5238b420a0ee024668d9aed797ac9a441cf30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.185898  510536 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key
	I0111 08:03:09.185958  510536 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key
	I0111 08:03:09.185971  510536 certs.go:257] generating profile certs ...
	I0111 08:03:09.186038  510536 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key
	I0111 08:03:09.186055  510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt with IP's: []
	I0111 08:03:09.419531  510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt ...
	I0111 08:03:09.419571  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt: {Name:mk9418e58d3186bffe31b727378fd0d08defb8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.419773  510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key ...
	I0111 08:03:09.419788  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key: {Name:mk349358a2ff97e24a0ee5565acc755705e64bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.419881  510536 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861
	I0111 08:03:09.419901  510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 08:03:09.847845  510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 ...
	I0111 08:03:09.847876  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861: {Name:mk412a98969fba1e6fc51a9a93b9bc1d873d6a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.848059  510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861 ...
	I0111 08:03:09.848075  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861: {Name:mkc93f76b0581a0b9e089b7481afceecd0c3c04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.848163  510536 certs.go:382] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt
	I0111 08:03:09.848240  510536 certs.go:386] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861 -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key
	I0111 08:03:09.848303  510536 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key
	I0111 08:03:09.848323  510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt with IP's: []
	I0111 08:03:10.141613  510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt ...
	I0111 08:03:10.141647  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt: {Name:mk6dace60bb0b0492d37d0756683e679aa0ab1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:10.141875  510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key ...
	I0111 08:03:10.141891  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key: {Name:mk0b0880a2b49969d86a957c1c38bf80a6fa094b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:10.141982  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:03:10.142003  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:03:10.142022  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:03:10.142034  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:03:10.142051  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:03:10.142068  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:03:10.142084  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:03:10.142099  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:03:10.142154  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem (1338 bytes)
	W0111 08:03:10.142196  510536 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638_empty.pem, impossibly tiny 0 bytes
	I0111 08:03:10.142209  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:03:10.142241  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem (1082 bytes)
	I0111 08:03:10.142272  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:03:10.142300  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem (1675 bytes)
	I0111 08:03:10.142362  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:03:10.142398  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.142416  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem -> /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.142435  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.143004  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:03:10.162328  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:03:10.184581  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:03:10.205364  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:03:10.225605  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:03:10.244217  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:03:10.262318  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:03:10.280609  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:03:10.298945  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:03:10.317711  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem --> /usr/share/ca-certificates/278638.pem (1338 bytes)
	I0111 08:03:10.337232  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /usr/share/ca-certificates/2786382.pem (1708 bytes)
	I0111 08:03:10.355950  510536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:03:10.369827  510536 ssh_runner.go:195] Run: openssl version
	I0111 08:03:10.376367  510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.384503  510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:03:10.392677  510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.396870  510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:24 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.396985  510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.438118  510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:03:10.445811  510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:03:10.453210  510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.460886  510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/278638.pem /etc/ssl/certs/278638.pem
	I0111 08:03:10.468116  510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.472747  510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:30 /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.472823  510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.514049  510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:03:10.521615  510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/278638.pem /etc/ssl/certs/51391683.0
	I0111 08:03:10.529704  510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.537387  510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2786382.pem /etc/ssl/certs/2786382.pem
	I0111 08:03:10.545355  510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.549343  510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:30 /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.549411  510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.590601  510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:03:10.598218  510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2786382.pem /etc/ssl/certs/3ec20f2e.0
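The b5213941.0, 51391683.0 and 3ec20f2e.0 link names above follow the standard OpenSSL subject-hash convention; they can be reproduced by hand (sketch, not part of the logged run):

    # Derive the hashed symlink name for a CA certificate:
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"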
	I0111 08:03:10.605617  510536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:03:10.609166  510536 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:03:10.609220  510536 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:03:10.609341  510536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0111 08:03:10.628830  510536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:03:10.640206  510536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:03:10.649415  510536 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:03:10.649480  510536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:03:10.660656  510536 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:03:10.660677  510536 kubeadm.go:158] found existing configuration files:
	
	I0111 08:03:10.660739  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:03:10.670232  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:03:10.670316  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:03:10.678581  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:03:10.688924  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:03:10.688993  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:03:10.696341  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:03:10.704448  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:03:10.704518  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:03:10.712096  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:03:10.719777  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:03:10.719863  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:03:10.727911  510536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:03:10.845766  510536 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:03:10.846305  510536 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:03:10.931360  510536 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:07:15.098226  510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:07:15.098261  510536 kubeadm.go:319] 
	I0111 08:07:15.098395  510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:07:15.103138  510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:07:15.103229  510536 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:07:15.103392  510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:07:15.103495  510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:07:15.103566  510536 kubeadm.go:319] OS: Linux
	I0111 08:07:15.103647  510536 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:07:15.103732  510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:07:15.103815  510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:07:15.103897  510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:07:15.103980  510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:07:15.104062  510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:07:15.104143  510536 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:07:15.104224  510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:07:15.104304  510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:07:15.104430  510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:07:15.104597  510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:07:15.104755  510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:07:15.104862  510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:07:15.108159  510536 out.go:252]   - Generating certificates and keys ...
	I0111 08:07:15.108298  510536 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:07:15.108388  510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:07:15.108475  510536 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:07:15.108582  510536 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:07:15.108652  510536 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:07:15.108742  510536 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:07:15.108832  510536 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:07:15.108986  510536 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:07:15.109071  510536 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:07:15.109237  510536 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:07:15.109320  510536 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:07:15.109403  510536 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:07:15.109483  510536 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:07:15.109555  510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:07:15.109634  510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:07:15.109706  510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:07:15.109788  510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:07:15.109867  510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:07:15.109933  510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:07:15.110023  510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:07:15.110091  510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:07:15.113314  510536 out.go:252]   - Booting up control plane ...
	I0111 08:07:15.113429  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:07:15.113518  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:07:15.113592  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:07:15.113703  510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:07:15.113801  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:07:15.113911  510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:07:15.114000  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:07:15.114043  510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:07:15.114178  510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:07:15.114288  510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:07:15.114363  510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233896s
	I0111 08:07:15.114372  510536 kubeadm.go:319] 
	I0111 08:07:15.114430  510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:07:15.114467  510536 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:07:15.114576  510536 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:07:15.114584  510536 kubeadm.go:319] 
	I0111 08:07:15.114691  510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:07:15.114727  510536 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:07:15.114763  510536 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W0111 08:07:15.114900  510536 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233896s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
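When the kubelet health check times out like this on a systemd host, the first things to compare are the runtime's and the kubelet's cgroup drivers. A hedged diagnostic sketch inside the node (standard commands, not part of the logged run):

    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50
    docker info --format '{{.CgroupDriver}}'         # runtime side
    grep cgroupDriver /var/lib/kubelet/config.yaml   # kubelet side; both should say systemd
    curl -sS http://127.0.0.1:10248/healthz; echo    # the endpoint kubeadm polls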
	
	I0111 08:07:15.114996  510536 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0111 08:07:15.116109  510536 kubeadm.go:319] 
	I0111 08:07:15.534570  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:07:15.548124  510536 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:07:15.548189  510536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:07:15.556213  510536 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:07:15.556274  510536 kubeadm.go:158] found existing configuration files:
	
	I0111 08:07:15.556335  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:07:15.563912  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:07:15.563978  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:07:15.571080  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:07:15.578655  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:07:15.578729  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:07:15.586262  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:07:15.593982  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:07:15.594058  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:07:15.601473  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:07:15.609148  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:07:15.609220  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
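	The four grep-then-rm pairs above are minikube's stale-config sweep: each /etc/kubernetes/*.conf is kept only if it already references https://control-plane.minikube.internal:8443, and here all four files are missing, so the rm calls are no-ops. Condensed into a single loop for readability (illustrative only; minikube actually issues each command separately over SSH, as logged):

	    # Remove kubeconfig files that do not point at the expected control-plane endpoint
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	            || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done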
	I0111 08:07:15.616665  510536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:07:15.736539  510536 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:07:15.737021  510536 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:07:15.804671  510536 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:11:17.452858  510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0111 08:11:17.452923  510536 kubeadm.go:319] 
	I0111 08:11:17.453044  510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:11:17.455493  510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:11:17.455552  510536 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:11:17.455655  510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:11:17.455726  510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:11:17.455771  510536 kubeadm.go:319] OS: Linux
	I0111 08:11:17.455821  510536 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:11:17.455882  510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:11:17.455934  510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:11:17.455990  510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:11:17.456045  510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:11:17.456098  510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:11:17.456174  510536 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:11:17.456250  510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:11:17.456309  510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:11:17.456404  510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:11:17.456555  510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:11:17.456685  510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:11:17.456751  510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:11:17.461728  510536 out.go:252]   - Generating certificates and keys ...
	I0111 08:11:17.461862  510536 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:11:17.461936  510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:11:17.462020  510536 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:11:17.462086  510536 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:11:17.462160  510536 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:11:17.462218  510536 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:11:17.462283  510536 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:11:17.462345  510536 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:11:17.462426  510536 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:11:17.462501  510536 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:11:17.462539  510536 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:11:17.462595  510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:11:17.462647  510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:11:17.462704  510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:11:17.462757  510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:11:17.462821  510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:11:17.462945  510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:11:17.463059  510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:11:17.463156  510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:11:17.466057  510536 out.go:252]   - Booting up control plane ...
	I0111 08:11:17.466204  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:11:17.466297  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:11:17.466399  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:11:17.466523  510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:11:17.466625  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:11:17.466736  510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:11:17.466873  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:11:17.466916  510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:11:17.467064  510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:11:17.467174  510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:11:17.467272  510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000843963s
	I0111 08:11:17.467285  510536 kubeadm.go:319] 
	I0111 08:11:17.467355  510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:11:17.467421  510536 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:11:17.467571  510536 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:11:17.467585  510536 kubeadm.go:319] 
	I0111 08:11:17.467699  510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:11:17.467740  510536 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:11:17.467774  510536 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:11:17.467797  510536 kubeadm.go:319] 
	I0111 08:11:17.467843  510536 kubeadm.go:403] duration metric: took 8m6.858627939s to StartCluster
	I0111 08:11:17.467883  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:11:17.467954  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:11:17.510393  510536 cri.go:96] found id: ""
	I0111 08:11:17.510436  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.510445  510536 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:11:17.510454  510536 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 08:11:17.510520  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:11:17.540055  510536 cri.go:96] found id: ""
	I0111 08:11:17.540090  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.540099  510536 logs.go:284] No container was found matching "etcd"
	I0111 08:11:17.540106  510536 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 08:11:17.540168  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:11:17.568990  510536 cri.go:96] found id: ""
	I0111 08:11:17.569063  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.569086  510536 logs.go:284] No container was found matching "coredns"
	I0111 08:11:17.569106  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:11:17.569199  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:11:17.598546  510536 cri.go:96] found id: ""
	I0111 08:11:17.598624  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.598647  510536 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:11:17.598667  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:11:17.598751  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:11:17.650676  510536 cri.go:96] found id: ""
	I0111 08:11:17.650750  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.650773  510536 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:11:17.650794  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:11:17.650928  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:11:17.684396  510536 cri.go:96] found id: ""
	I0111 08:11:17.684474  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.684505  510536 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:11:17.684527  510536 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 08:11:17.684636  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:11:17.745829  510536 cri.go:96] found id: ""
	I0111 08:11:17.745873  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.745883  510536 logs.go:284] No container was found matching "kindnet"
	I0111 08:11:17.745892  510536 logs.go:123] Gathering logs for kubelet ...
	I0111 08:11:17.745930  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:11:17.828347  510536 logs.go:123] Gathering logs for dmesg ...
	I0111 08:11:17.828383  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:11:17.850604  510536 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:11:17.850630  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:11:17.973516  510536 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:11:17.963026    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.963926    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.967222    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.967572    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.969066    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0111 08:11:17.973541  510536 logs.go:123] Gathering logs for Docker ...
	I0111 08:11:17.973554  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0111 08:11:18.001046  510536 logs.go:123] Gathering logs for container status ...
	I0111 08:11:18.001086  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
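	With no control-plane containers found, minikube falls back to collecting raw diagnostics, and the five commands above are the whole sequence. They can be replayed by hand inside the node container when triaging a failure like this one; the commands below are copied from the log lines above:

	    sudo journalctl -u kubelet -n 400                # kubelet journal
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u docker -u cri-docker -n 400   # Docker and cri-dockerd journals
	    sudo crictl ps -a || sudo docker ps -a           # container status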
	W0111 08:11:18.046288  510536 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000843963s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
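	Note the progression between the two kubeadm attempts: the first wait-control-plane failure reported "context deadline exceeded" on the healthz call, while this one reports "connection refused", which suggests the kubelet is no longer even listening on its health port rather than merely slow. The probe kubeadm uses is plain HTTP and can be repeated from inside the node:

	    # A healthy kubelet answers "ok" on its healthz endpoint;
	    # "connection refused" means nothing is listening on 10248
	    curl -sSL http://127.0.0.1:10248/healthz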
	W0111 08:11:18.046406  510536 out.go:285] * 
	W0111 08:11:18.046610  510536 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000843963s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:11:18.046721  510536 out.go:285] * 
	W0111 08:11:18.047132  510536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
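	The command named in the box works against this profile as-is; a sketch assuming the local build of minikube used by the test:

	    # Bundle full diagnostics for attaching to a GitHub issue
	    out/minikube-linux-arm64 -p force-systemd-flag-176470 logs --file=logs.txt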
	I0111 08:11:18.052816  510536 out.go:203] 
	W0111 08:11:18.055641  510536 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000843963s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:11:18.055919  510536 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:11:18.055975  510536 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:11:18.060639  510536 out.go:203] 

                                                
                                                
** /stderr **
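The suggestion in the log amounts to restarting the profile with the kubelet's cgroup driver pinned to systemd. A sketch of the retry, reusing the flags from the failed invocation plus the suggested --extra-config (untested against this environment; deleting first avoids the old state interfering):

    out/minikube-linux-arm64 delete -p force-systemd-flag-176470
    out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd \
        --driver=docker --container-runtime=docker \
        --extra-config=kubelet.cgroup-driver=systemd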
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-176470 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-11 08:11:18.761835576 +0000 UTC m=+2857.878215376
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-176470
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-176470:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49",
	        "Created": "2026-01-11T08:02:59.486672984Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 510959,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:02:59.556542956Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/hostname",
	        "HostsPath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/hosts",
	        "LogPath": "/var/lib/docker/containers/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49/5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49-json.log",
	        "Name": "/force-systemd-flag-176470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-176470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-176470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5c184c721f25fca51f626bddcfe59f674410f8078ca9875d5fd4ae7dd11e1f49",
	                "LowerDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3-init/diff:/var/lib/docker/overlay2/e4b3b3f7b2adc33a7ca49c4e0ccdd05f06b3e555556bac3db149fafb744bb371/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90aae16436fdea248948d1bc76c2767ca65cc482cbc13ecaac8eb594f4f461a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-176470",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-176470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-176470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-176470",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-176470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67148d5f8f89e37fb1e1a27a81118c30380e4084641fb450af1f67ff9a1f3fd2",
	            "SandboxKey": "/var/run/docker/netns/67148d5f8f89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33365"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33366"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33369"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33367"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33368"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-176470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:b9:27:f2:b4:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3b6e1c162656adf1a9d01bfef379c0d2c9e5a5a5e226c14f3fd4ba142242bf34",
	                    "EndpointID": "abcebaa1cf221454fc039923d19aa6e99abd10973f27618451a7214def396a85",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-176470",
	                        "5c184c721f25"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
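Single fields can be pulled out of the inspect dump with a Go-template --format string instead of scanning the full JSON, mirroring the `docker info --format` check this test performs. For example, the state and cgroup namespace mode recorded above:

    docker inspect -f '{{.State.Status}} {{.HostConfig.CgroupnsMode}}' force-systemd-flag-176470
    # from the dump above this prints: running host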
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-176470 -n force-systemd-flag-176470
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-176470 -n force-systemd-flag-176470: exit status 6 (501.868109ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:11:19.262903  524304 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-176470" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
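
The exit status 6 comes from status.go:458 above: the host container is Running, but the profile's endpoint is missing from the kubeconfig file. A rough sketch of that endpoint lookup in Go (the kubeconfig field names are the standard ones; gopkg.in/yaml.v3 stands in for minikube's actual kubeconfig handling):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // kubeconfig models just the clusters list of a standard kubeconfig.
    type kubeconfig struct {
        Clusters []struct {
            Name    string `yaml:"name"`
            Cluster struct {
                Server string `yaml:"server"`
            } `yaml:"cluster"`
        } `yaml:"clusters"`
    }

    // endpoint returns the API server URL for the named cluster, or an
    // error shaped like the one logged above when the entry is absent.
    func endpoint(path, name string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        var kc kubeconfig
        if err := yaml.Unmarshal(data, &kc); err != nil {
            return "", err
        }
        for _, c := range kc.Clusters {
            if c.Name == name {
                return c.Cluster.Server, nil
            }
        }
        return "", fmt.Errorf("%q does not appear in %s", name, path)
    }

    func main() {
        ep, err := endpoint(os.Getenv("KUBECONFIG"), "force-systemd-flag-176470")
        if err != nil {
            fmt.Fprintln(os.Stderr, "kubeconfig endpoint:", err)
            os.Exit(6) // mirrors the status command's exit code
        }
        fmt.Println(ep)
    }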
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-176470 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-195160 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo containerd config dump                                                                                                                                                                                                        │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo crio config                                                                                                                                                                                                                   │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ delete  │ -p cilium-195160                                                                                                                                                                                                                                    │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                                        │ force-systemd-env-081796  │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ delete  │ -p NoKubernetes-616586                                                                                                                                                                                                                              │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                             │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ ssh     │ -p NoKubernetes-616586 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ stop    │ -p NoKubernetes-616586                                                                                                                                                                                                                              │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p NoKubernetes-616586 --driver=docker  --container-runtime=docker                                                                                                                                                                                  │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ ssh     │ -p NoKubernetes-616586 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ delete  │ -p NoKubernetes-616586                                                                                                                                                                                                                              │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                       │ force-systemd-flag-176470 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ force-systemd-env-081796 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                 │ force-systemd-env-081796  │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │ 11 Jan 26 08:10 UTC │
	│ delete  │ -p force-systemd-env-081796                                                                                                                                                                                                                         │ force-systemd-env-081796  │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │ 11 Jan 26 08:10 UTC │
	│ start   │ -p docker-flags-747538 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ docker-flags-747538       │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │                     │
	│ ssh     │ force-systemd-flag-176470 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                │ force-systemd-flag-176470 │ jenkins │ v1.37.0 │ 11 Jan 26 08:11 UTC │ 11 Jan 26 08:11 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:10:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:10:49.343746  520924 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:10:49.343869  520924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:10:49.343878  520924 out.go:374] Setting ErrFile to fd 2...
	I0111 08:10:49.343883  520924 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:10:49.344129  520924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 08:10:49.344599  520924 out.go:368] Setting JSON to false
	I0111 08:10:49.345447  520924 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":10399,"bootTime":1768108650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 08:10:49.345514  520924 start.go:143] virtualization:  
	I0111 08:10:49.349240  520924 out.go:179] * [docker-flags-747538] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:10:49.353771  520924 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:10:49.353846  520924 notify.go:221] Checking for updates...
	I0111 08:10:49.361233  520924 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:10:49.364426  520924 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 08:10:49.367566  520924 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 08:10:49.370647  520924 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:10:49.373618  520924 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:10:49.377147  520924 config.go:182] Loaded profile config "force-systemd-flag-176470": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:10:49.377321  520924 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:10:49.397955  520924 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:10:49.398077  520924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:10:49.456806  520924 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:10:49.447340882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:10:49.456911  520924 docker.go:319] overlay module found
	I0111 08:10:49.460242  520924 out.go:179] * Using the docker driver based on user configuration
	I0111 08:10:49.463229  520924 start.go:309] selected driver: docker
	I0111 08:10:49.463247  520924 start.go:928] validating driver "docker" against <nil>
	I0111 08:10:49.463271  520924 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:10:49.464023  520924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:10:49.521667  520924 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:10:49.512130508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:10:49.521812  520924 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:10:49.522025  520924 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0111 08:10:49.525045  520924 out.go:179] * Using Docker driver with root privileges
	I0111 08:10:49.527974  520924 cni.go:84] Creating CNI manager for ""
	I0111 08:10:49.528063  520924 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:10:49.528077  520924 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0111 08:10:49.528167  520924 start.go:353] cluster config:
	{Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:10:49.533303  520924 out.go:179] * Starting "docker-flags-747538" primary control-plane node in "docker-flags-747538" cluster
	I0111 08:10:49.536247  520924 cache.go:134] Beginning downloading kic base image for docker with docker
	I0111 08:10:49.539333  520924 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:10:49.542210  520924 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:10:49.542260  520924 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0111 08:10:49.542272  520924 cache.go:65] Caching tarball of preloaded images
	I0111 08:10:49.542271  520924 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:10:49.542384  520924 preload.go:251] Found /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:10:49.542395  520924 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0111 08:10:49.542534  520924 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/config.json ...
	I0111 08:10:49.542567  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/config.json: {Name:mkddbc35a2b6012f37ba90ab45436ce25557e0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:10:49.560693  520924 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:10:49.560716  520924 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:10:49.560736  520924 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:10:49.560772  520924 start.go:360] acquireMachinesLock for docker-flags-747538: {Name:mk3014c19513dad4e5876bfc3cf028bc21b9e961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:10:49.560888  520924 start.go:364] duration metric: took 94.857µs to acquireMachinesLock for "docker-flags-747538"
	I0111 08:10:49.560917  520924 start.go:93] Provisioning new machine with config: &{Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0111 08:10:49.560998  520924 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:10:49.564432  520924 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:10:49.564663  520924 start.go:159] libmachine.API.Create for "docker-flags-747538" (driver="docker")
	I0111 08:10:49.564701  520924 client.go:173] LocalClient.Create starting
	I0111 08:10:49.564798  520924 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem
	I0111 08:10:49.564837  520924 main.go:144] libmachine: Decoding PEM data...
	I0111 08:10:49.564857  520924 main.go:144] libmachine: Parsing certificate...
	I0111 08:10:49.564912  520924 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem
	I0111 08:10:49.564934  520924 main.go:144] libmachine: Decoding PEM data...
	I0111 08:10:49.564945  520924 main.go:144] libmachine: Parsing certificate...
	I0111 08:10:49.565319  520924 cli_runner.go:164] Run: docker network inspect docker-flags-747538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:10:49.581266  520924 cli_runner.go:211] docker network inspect docker-flags-747538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:10:49.581354  520924 network_create.go:284] running [docker network inspect docker-flags-747538] to gather additional debugging logs...
	I0111 08:10:49.581376  520924 cli_runner.go:164] Run: docker network inspect docker-flags-747538
	W0111 08:10:49.597247  520924 cli_runner.go:211] docker network inspect docker-flags-747538 returned with exit code 1
	I0111 08:10:49.597292  520924 network_create.go:287] error running [docker network inspect docker-flags-747538]: docker network inspect docker-flags-747538: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-747538 not found
	I0111 08:10:49.597305  520924 network_create.go:289] output of [docker network inspect docker-flags-747538]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-747538 not found
	
	** /stderr **
	I0111 08:10:49.597400  520924 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:10:49.614104  520924 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
	I0111 08:10:49.614510  520924 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40d7f82078db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:4c:a4:8c:ba:d2} reservation:<nil>}
	I0111 08:10:49.614741  520924 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-462883b60cc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:e8:a2:f7:f9:41} reservation:<nil>}
	I0111 08:10:49.615097  520924 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3b6e1c162656 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:94:9f:06:02:24} reservation:<nil>}
	I0111 08:10:49.615547  520924 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a08cd0}
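
The five network.go lines above are minikube's subnet probe: walk candidate private /24s in a fixed order and take the first one whose bridge gateway is not already on a host interface. A simplified sketch of that walk (the candidate list matches this run; the in-use check via net.InterfaceAddrs is an illustrative stand-in for minikube's interface inspection):

    package main

    import (
        "fmt"
        "net"
    )

    // pickSubnet returns the first candidate /24 whose .1 gateway address
    // is not already assigned to a local interface.
    func pickSubnet(candidates []string) (string, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return "", err
        }
        inUse := map[string]bool{}
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok {
                inUse[ipn.IP.String()] = true
            }
        }
        for _, c := range candidates {
            _, ipn, err := net.ParseCIDR(c)
            if err != nil {
                return "", err
            }
            gw := ipn.IP.To4()
            gw[3] = 1 // the bridge gateway is .1 by convention
            if !inUse[gw.String()] {
                return c, nil
            }
            fmt.Printf("skipping subnet %s that is taken\n", c)
        }
        return "", fmt.Errorf("no free subnet among candidates")
    }

    func main() {
        // Same order the log shows: .49, .58, .67, .76 are taken, .85 is free.
        free, err := pickSubnet([]string{"192.168.49.0/24", "192.168.58.0/24",
            "192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"})
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", free)
    }

In this run the first four subnets were held by earlier profiles' bridges (br-4553382a3354 and friends), so docker-flags-747538 lands on 192.168.85.0/24 with gateway .1 and static container IP .2.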
	I0111 08:10:49.615569  520924 network_create.go:124] attempt to create docker network docker-flags-747538 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 08:10:49.615625  520924 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-747538 docker-flags-747538
	I0111 08:10:49.667963  520924 network_create.go:108] docker network docker-flags-747538 192.168.85.0/24 created
	I0111 08:10:49.667997  520924 kic.go:121] calculated static IP "192.168.85.2" for the "docker-flags-747538" container
	I0111 08:10:49.668089  520924 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:10:49.688516  520924 cli_runner.go:164] Run: docker volume create docker-flags-747538 --label name.minikube.sigs.k8s.io=docker-flags-747538 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:10:49.706687  520924 oci.go:103] Successfully created a docker volume docker-flags-747538
	I0111 08:10:49.706780  520924 cli_runner.go:164] Run: docker run --rm --name docker-flags-747538-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-747538 --entrypoint /usr/bin/test -v docker-flags-747538:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:10:50.229477  520924 oci.go:107] Successfully prepared a docker volume docker-flags-747538
	I0111 08:10:50.229555  520924 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:10:50.229566  520924 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:10:50.229636  520924 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-747538:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:10:53.570959  520924 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-747538:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.341275255s)
	I0111 08:10:53.570998  520924 kic.go:203] duration metric: took 3.341428014s to extract preloaded images to volume ...
	W0111 08:10:53.571151  520924 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:10:53.571262  520924 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:10:53.626189  520924 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-747538 --name docker-flags-747538 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-747538 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-747538 --network docker-flags-747538 --ip 192.168.85.2 --volume docker-flags-747538:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:10:53.939121  520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Running}}
	I0111 08:10:53.966213  520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Status}}
	I0111 08:10:53.992131  520924 cli_runner.go:164] Run: docker exec docker-flags-747538 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:10:54.053280  520924 oci.go:144] the created container "docker-flags-747538" has a running status.
	I0111 08:10:54.053308  520924 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa...
	I0111 08:10:54.269598  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:10:54.269707  520924 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:10:54.317630  520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Status}}
	I0111 08:10:54.344650  520924 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:10:54.344668  520924 kic_runner.go:114] Args: [docker exec --privileged docker-flags-747538 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:10:54.430625  520924 cli_runner.go:164] Run: docker container inspect docker-flags-747538 --format={{.State.Status}}
	I0111 08:10:54.453352  520924 machine.go:94] provisionDockerMachine start ...
	I0111 08:10:54.453451  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:54.488754  520924 main.go:144] libmachine: Using SSH client type: native
	I0111 08:10:54.489084  520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33370 <nil> <nil>}
	I0111 08:10:54.489098  520924 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:10:54.489813  520924 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53496->127.0.0.1:33370: read: connection reset by peer
	I0111 08:10:57.638319  520924 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-747538
	
	I0111 08:10:57.638342  520924 ubuntu.go:182] provisioning hostname "docker-flags-747538"
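
The "connection reset by peer" dial at 08:10:54 followed by the clean hostname round-trip at 08:10:57 is the usual boot race: the SSH port is published before sshd inside the container is ready, so libmachine retries until the handshake succeeds. A generic version of that wait loop (the address, interval, and timeout are arbitrary choices for the sketch):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls a TCP port until it accepts a connection or the
    // deadline passes; early "connection reset" dials are expected while
    // sshd is still starting inside the container.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh at %s not ready after %v", addr, timeout)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:33370", time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("sshd is accepting connections")
    }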
	I0111 08:10:57.638413  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:57.655990  520924 main.go:144] libmachine: Using SSH client type: native
	I0111 08:10:57.656366  520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33370 <nil> <nil>}
	I0111 08:10:57.656385  520924 main.go:144] libmachine: About to run SSH command:
	sudo hostname docker-flags-747538 && echo "docker-flags-747538" | sudo tee /etc/hostname
	I0111 08:10:57.812662  520924 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-747538
	
	I0111 08:10:57.812781  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:57.830892  520924 main.go:144] libmachine: Using SSH client type: native
	I0111 08:10:57.831210  520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33370 <nil> <nil>}
	I0111 08:10:57.831226  520924 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-747538' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-747538/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-747538' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:10:57.979180  520924 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:10:57.979206  520924 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-276769/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-276769/.minikube}
	I0111 08:10:57.979230  520924 ubuntu.go:190] setting up certificates
	I0111 08:10:57.979246  520924 provision.go:84] configureAuth start
	I0111 08:10:57.979308  520924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-747538
	I0111 08:10:57.996589  520924 provision.go:143] copyHostCerts
	I0111 08:10:57.996636  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:10:57.996669  520924 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem, removing ...
	I0111 08:10:57.996682  520924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:10:57.996762  520924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem (1082 bytes)
	I0111 08:10:57.996856  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:10:57.996877  520924 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem, removing ...
	I0111 08:10:57.996882  520924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:10:57.996909  520924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem (1123 bytes)
	I0111 08:10:57.996962  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:10:57.996980  520924 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem, removing ...
	I0111 08:10:57.996985  520924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:10:57.997014  520924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem (1675 bytes)
	I0111 08:10:57.997074  520924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem org=jenkins.docker-flags-747538 san=[127.0.0.1 192.168.85.2 docker-flags-747538 localhost minikube]
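
configureAuth mints a server certificate whose SANs cover every name the Docker TLS endpoint may be dialed by: loopback, the container IP, the profile name, localhost, and minikube. A compressed sketch of that SAN handling in Go, self-signing instead of chaining to minikube's CA purely to keep the example short:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.docker-flags-747538"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the provision.go line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"docker-flags-747538", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }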
	I0111 08:10:58.536454  520924 provision.go:177] copyRemoteCerts
	I0111 08:10:58.536520  520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:10:58.536558  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:58.553113  520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
	I0111 08:10:58.659185  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:10:58.659270  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 08:10:58.676842  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:10:58.676916  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0111 08:10:58.694011  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:10:58.694072  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:10:58.711713  520924 provision.go:87] duration metric: took 732.442991ms to configureAuth
	I0111 08:10:58.711785  520924 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:10:58.712005  520924 config.go:182] Loaded profile config "docker-flags-747538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:10:58.712103  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:58.729751  520924 main.go:144] libmachine: Using SSH client type: native
	I0111 08:10:58.730981  520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33370 <nil> <nil>}
	I0111 08:10:58.730999  520924 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0111 08:10:58.888341  520924 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0111 08:10:58.888364  520924 ubuntu.go:71] root file system type: overlay
	I0111 08:10:58.888485  520924 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0111 08:10:58.888568  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:58.910757  520924 main.go:144] libmachine: Using SSH client type: native
	I0111 08:10:58.912196  520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33370 <nil> <nil>}
	I0111 08:10:58.912292  520924 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0111 08:10:59.076562  520924 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0111 08:10:59.076660  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:10:59.094332  520924 main.go:144] libmachine: Using SSH client type: native
	I0111 08:10:59.094655  520924 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33370 <nil> <nil>}
	I0111 08:10:59.094672  520924 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0111 08:11:00.397732  520924 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2026-01-08 19:56:21.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-11 08:10:59.072052587 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=FOO=BAR
	+Environment=BAZ=BAT
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
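
The diff output above is the other half of an idempotent unit update: the rendered docker.service.new is compared against the live unit, and only on a difference is it moved into place and the daemon reloaded, enabled, and restarted. The same compare-then-replace pattern in Go (the paths match this log; the reload hook is a stand-in for the systemctl sequence in the one-liner):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // replaceIfChanged swaps newPath into place and runs reload only when
    // the contents differ, mirroring `diff -u ... || { mv ...; }` above.
    func replaceIfChanged(curPath, newPath string, reload func() error) error {
        cur, err := os.ReadFile(curPath)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        next, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if bytes.Equal(cur, next) {
            return os.Remove(newPath) // nothing to do; drop the staged copy
        }
        if err := os.Rename(newPath, curPath); err != nil {
            return err
        }
        return reload()
    }

    func main() {
        err := replaceIfChanged(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new",
            func() error {
                for _, args := range [][]string{
                    {"systemctl", "daemon-reload"},
                    {"systemctl", "enable", "docker"},
                    {"systemctl", "restart", "docker"},
                } {
                    if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                        return fmt.Errorf("%v: %v: %s", args, err, out)
                    }
                }
                return nil
            })
        if err != nil {
            panic(err)
        }
    }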
	
	I0111 08:11:00.397773  520924 machine.go:97] duration metric: took 5.944399319s to provisionDockerMachine
	I0111 08:11:00.397786  520924 client.go:176] duration metric: took 10.833073406s to LocalClient.Create
	I0111 08:11:00.397813  520924 start.go:167] duration metric: took 10.833150401s to libmachine.API.Create "docker-flags-747538"
	I0111 08:11:00.397824  520924 start.go:293] postStartSetup for "docker-flags-747538" (driver="docker")
	I0111 08:11:00.397840  520924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:11:00.397936  520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:11:00.397995  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:11:00.425747  520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
	I0111 08:11:00.535501  520924 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:11:00.540367  520924 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:11:00.540395  520924 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:11:00.540432  520924 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/addons for local assets ...
	I0111 08:11:00.540502  520924 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/files for local assets ...
	I0111 08:11:00.540587  520924 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> 2786382.pem in /etc/ssl/certs
	I0111 08:11:00.540599  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /etc/ssl/certs/2786382.pem
	I0111 08:11:00.540703  520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:11:00.548481  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:11:00.566435  520924 start.go:296] duration metric: took 168.589908ms for postStartSetup
	I0111 08:11:00.566863  520924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-747538
	I0111 08:11:00.584575  520924 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/config.json ...
	I0111 08:11:00.584960  520924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:11:00.585010  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:11:00.602049  520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
	I0111 08:11:00.703790  520924 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:11:00.708694  520924 start.go:128] duration metric: took 11.14768044s to createHost
	I0111 08:11:00.708719  520924 start.go:83] releasing machines lock for "docker-flags-747538", held for 11.147818906s
	I0111 08:11:00.708791  520924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-747538
	I0111 08:11:00.725687  520924 ssh_runner.go:195] Run: cat /version.json
	I0111 08:11:00.725741  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:11:00.725999  520924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:11:00.726060  520924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-747538
	I0111 08:11:00.744272  520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
	I0111 08:11:00.752842  520924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/docker-flags-747538/id_rsa Username:docker}
	I0111 08:11:00.846460  520924 ssh_runner.go:195] Run: systemctl --version
	I0111 08:11:00.948121  520924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:11:00.953117  520924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:11:00.953202  520924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:11:00.980681  520924 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
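
Before settling on a runtime, minikube sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, which is what the find ... -exec mv invocation above does. A rough Go equivalent, with the directory and name patterns taken from the command in the log:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		// Skip files already disabled on a previous run.
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
		}
	}
}
```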
	I0111 08:11:00.980757  520924 start.go:496] detecting cgroup driver to use...
	I0111 08:11:00.980804  520924 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 08:11:00.980954  520924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:11:00.995756  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:11:01.005670  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:11:01.015303  520924 containerd.go:147] configuring containerd to use "cgroupfs" as cgroup driver...
	I0111 08:11:01.015382  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0111 08:11:01.024678  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:11:01.033807  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:11:01.042636  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:11:01.051691  520924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:11:01.059768  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:11:01.068779  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:11:01.077853  520924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:11:01.086986  520924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:11:01.094918  520924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:11:01.103076  520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:01.244916  520924 ssh_runner.go:195] Run: sudo systemctl restart containerd
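
The preceding run of sed edits is the cgroup-driver alignment for containerd: SystemdCgroup is forced to false to match the detected "cgroupfs" driver, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are rewritten to runc.v2, and conf_dir is pinned to /etc/cni/net.d. A rough Go equivalent of just the SystemdCgroup rewrite, a sketch only (path and file mode assumed):

```go
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml" // location as used in the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
```

As in the log, the rewritten config only takes effect once containerd is restarted.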
	I0111 08:11:01.325753  520924 start.go:496] detecting cgroup driver to use...
	I0111 08:11:01.325806  520924 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0111 08:11:01.325859  520924 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0111 08:11:01.341220  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:11:01.354665  520924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:11:01.381050  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:11:01.394034  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:11:01.407512  520924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:11:01.422626  520924 ssh_runner.go:195] Run: which cri-dockerd
	I0111 08:11:01.426500  520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0111 08:11:01.434184  520924 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0111 08:11:01.447582  520924 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0111 08:11:01.565617  520924 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0111 08:11:01.688439  520924 docker.go:578] configuring docker to use "cgroupfs" as cgroup driver...
	I0111 08:11:01.688573  520924 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0111 08:11:01.703071  520924 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0111 08:11:01.717759  520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:01.841223  520924 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0111 08:11:02.331482  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
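
The 130-byte /etc/docker/daemon.json written just above is not echoed into the log. A representative payload, assuming the standard exec-opts key Docker uses to select its cgroup driver; minikube's actual file may carry additional keys:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig models only the field relevant here; Docker's daemon.json
// supports many more keys than this sketch includes.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=cgroupfs"}}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // prints the exec-opts stanza selecting cgroupfs
}
```

The systemctl reset-failed docker issued before the restart clears the unit's failed state, so the restart is not tripped up by any earlier failed start attempts.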
	I0111 08:11:02.348529  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0111 08:11:02.363545  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:11:02.379991  520924 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0111 08:11:02.515509  520924 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0111 08:11:02.647213  520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:02.769736  520924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0111 08:11:02.785421  520924 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0111 08:11:02.798179  520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:02.912586  520924 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0111 08:11:02.978874  520924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:11:02.994139  520924 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0111 08:11:02.994337  520924 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0111 08:11:02.998727  520924 start.go:574] Will wait 60s for crictl version
	I0111 08:11:02.998979  520924 ssh_runner.go:195] Run: which crictl
	I0111 08:11:03.003497  520924 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:11:03.032157  520924 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.4
	RuntimeApiVersion:  v1
	I0111 08:11:03.032272  520924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:11:03.055420  520924 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:11:03.083794  520924 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.4 ...
	I0111 08:11:03.086636  520924 out.go:179]   - opt debug
	I0111 08:11:03.089617  520924 out.go:179]   - opt icc=true
	I0111 08:11:03.092443  520924 out.go:179]   - env FOO=BAR
	I0111 08:11:03.095378  520924 out.go:179]   - env BAZ=BAT
	I0111 08:11:03.098244  520924 cli_runner.go:164] Run: docker network inspect docker-flags-747538 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:11:03.114929  520924 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:11:03.118888  520924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
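
The bash one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same upsert expressed as a Go sketch; path handling is simplified relative to the log's detour through /tmp/h.$$ and sudo cp:

```go
package main

import (
	"log"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line
// maps name to ip, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop the old entry (grep -v $'\t<name>$') and empty lines.
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```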
	I0111 08:11:03.128929  520924 kubeadm.go:884] updating cluster {Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:11:03.129053  520924 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:11:03.129109  520924 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:11:03.154307  520924 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:11:03.154330  520924 docker.go:624] Images already preloaded, skipping extraction
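
The decision to skip extraction rests on a simple containment check: the docker images listing above already includes every image the preload for v1.35.0 expects. A sketch of that check; the function name and inputs here are illustrative, not minikube's actual code:

```go
package main

import "fmt"

// missingImages reports which expected image refs are absent from the
// runtime's image list; an empty result lets the preload step be skipped.
func missingImages(expected, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, p := range present {
		have[p] = true
	}
	var missing []string
	for _, e := range expected {
		if !have[e] {
			missing = append(missing, e)
		}
	}
	return missing
}

func main() {
	expected := []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.6-0"}
	present := []string{"registry.k8s.io/pause:3.10.1"}
	fmt.Println(missingImages(expected, present)) // [registry.k8s.io/etcd:3.6.6-0]
}
```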
	I0111 08:11:03.154341  520924 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0111 08:11:03.154444  520924 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=docker-flags-747538 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0111 08:11:03.154512  520924 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0111 08:11:03.206688  520924 cni.go:84] Creating CNI manager for ""
	I0111 08:11:03.206714  520924 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:11:03.206740  520924 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:11:03.206765  520924 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:docker-flags-747538 NodeName:docker-flags-747538 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:11:03.206922  520924 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "docker-flags-747538"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
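
The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), then a KubeletConfiguration and a KubeProxyConfiguration, separated by ---; kubeadm consumes all of them from the one file later passed via --config. A stdlib-only sketch that splits such a stream and reports each document's kind (a real consumer would use a YAML library):

```go
package main

import (
	"fmt"
	"strings"
)

// kinds scans each ---separated document for its top-level "kind:" line.
func kinds(stream string) []string {
	var out []string
	for _, doc := range strings.Split(stream, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if rest, ok := strings.CutPrefix(line, "kind: "); ok {
				out = append(out, strings.TrimSpace(rest))
				break
			}
		}
	}
	return out
}

func main() {
	stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration]
}
```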
	
	I0111 08:11:03.207034  520924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:11:03.215311  520924 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:11:03.215405  520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:11:03.223392  520924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0111 08:11:03.236933  520924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:11:03.249661  520924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2240 bytes)
	I0111 08:11:03.263019  520924 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:11:03.266942  520924 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:11:03.277694  520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:11:03.403482  520924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:11:03.421836  520924 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538 for IP: 192.168.85.2
	I0111 08:11:03.421859  520924 certs.go:195] generating shared ca certs ...
	I0111 08:11:03.421875  520924 certs.go:227] acquiring lock for ca certs: {Name:mk5238b420a0ee024668d9aed797ac9a441cf30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:03.422027  520924 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key
	I0111 08:11:03.422080  520924 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key
	I0111 08:11:03.422092  520924 certs.go:257] generating profile certs ...
	I0111 08:11:03.422147  520924 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.key
	I0111 08:11:03.422162  520924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.crt with IP's: []
	I0111 08:11:03.539758  520924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.crt ...
	I0111 08:11:03.539790  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.crt: {Name:mk270fca02964aa29f311e366014d5733f531228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:03.539993  520924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.key ...
	I0111 08:11:03.540009  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/client.key: {Name:mke5998452e96841a984b78967db750f062a137a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:03.540105  520924 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb
	I0111 08:11:03.540122  520924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 08:11:03.941138  520924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb ...
	I0111 08:11:03.941177  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb: {Name:mk30ca1e7aba90138dc745c3c5f0b7897bf7938f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:03.941382  520924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb ...
	I0111 08:11:03.941399  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb: {Name:mkfcc2878fce37bf3bf735da13ccc68e9427f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:03.941487  520924 certs.go:382] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt.5e2239bb -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt
	I0111 08:11:03.941572  520924 certs.go:386] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key.5e2239bb -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key
	I0111 08:11:03.941633  520924 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key
	I0111 08:11:03.941651  520924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt with IP's: []
	I0111 08:11:04.202994  520924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt ...
	I0111 08:11:04.203038  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt: {Name:mka203b1581fea2d77db09fdd4dc7dfae878c175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:04.203218  520924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key ...
	I0111 08:11:04.203235  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key: {Name:mkf9d6bd09f6248861b5dcd70a3546265c71546f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
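
Each profile cert generated above is a leaf signed by the shared minikube CA, and the apiserver cert carries IP SANs: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1, and the node IP 192.168.85.2. A compact crypto/x509 sketch of issuing a cert with those SANs; key sizes and validity periods here are assumptions, not minikube's exact parameters:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// A real provisioner would load ca.crt/ca.key; a throwaway CA keeps
	// the sketch self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the log line above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	if _, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
}
```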
	I0111 08:11:04.203311  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:11:04.203340  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:11:04.203357  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:11:04.203374  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:11:04.203385  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:11:04.203401  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:11:04.203412  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:11:04.203428  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:11:04.203484  520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem (1338 bytes)
	W0111 08:11:04.203526  520924 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638_empty.pem, impossibly tiny 0 bytes
	I0111 08:11:04.203539  520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:11:04.203570  520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem (1082 bytes)
	I0111 08:11:04.203601  520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:11:04.203629  520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem (1675 bytes)
	I0111 08:11:04.203677  520924 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:11:04.203715  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /usr/share/ca-certificates/2786382.pem
	I0111 08:11:04.203741  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:04.203757  520924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem -> /usr/share/ca-certificates/278638.pem
	I0111 08:11:04.204350  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:11:04.222456  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:11:04.240610  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:11:04.258784  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:11:04.276911  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0111 08:11:04.296921  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0111 08:11:04.315427  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:11:04.333328  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/docker-flags-747538/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0111 08:11:04.355116  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /usr/share/ca-certificates/2786382.pem (1708 bytes)
	I0111 08:11:04.373870  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:11:04.391860  520924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem --> /usr/share/ca-certificates/278638.pem (1338 bytes)
	I0111 08:11:04.410365  520924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:11:04.423690  520924 ssh_runner.go:195] Run: openssl version
	I0111 08:11:04.430385  520924 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2786382.pem
	I0111 08:11:04.438057  520924 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2786382.pem /etc/ssl/certs/2786382.pem
	I0111 08:11:04.445923  520924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2786382.pem
	I0111 08:11:04.450333  520924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:30 /usr/share/ca-certificates/2786382.pem
	I0111 08:11:04.450447  520924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2786382.pem
	I0111 08:11:04.491674  520924 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:11:04.499247  520924 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2786382.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:11:04.506933  520924 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:04.514584  520924 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:11:04.522445  520924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:04.526347  520924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:24 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:04.526430  520924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:11:04.567441  520924 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:11:04.575040  520924 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:11:04.582544  520924 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/278638.pem
	I0111 08:11:04.589968  520924 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/278638.pem /etc/ssl/certs/278638.pem
	I0111 08:11:04.597888  520924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278638.pem
	I0111 08:11:04.601813  520924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:30 /usr/share/ca-certificates/278638.pem
	I0111 08:11:04.601881  520924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278638.pem
	I0111 08:11:04.643564  520924 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:11:04.651200  520924 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/278638.pem /etc/ssl/certs/51391683.0
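
The hash-named symlinks above build OpenSSL's hashed CA directory: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA.pem), and OpenSSL resolves trust anchors via <hash>.<n> links in /etc/ssl/certs. A sketch of the same sequence from Go, collapsing the log's two links into one for brevity; paths are assumed, and the openssl flags are exactly those run in the log:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout -in <cert> prints the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic ln -f: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}
```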
	I0111 08:11:04.658573  520924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:11:04.662081  520924 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:11:04.662166  520924 kubeadm.go:401] StartCluster: {Name:docker-flags-747538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-747538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:11:04.662303  520924 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0111 08:11:04.680219  520924 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:11:04.687967  520924 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:11:04.695818  520924 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:11:04.695886  520924 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:11:04.703848  520924 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:11:04.703871  520924 kubeadm.go:158] found existing configuration files:
	
	I0111 08:11:04.703947  520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:11:04.711621  520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:11:04.711694  520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:11:04.719183  520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:11:04.726885  520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:11:04.726993  520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:11:04.734592  520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:11:04.742664  520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:11:04.742761  520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:11:04.750405  520924 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:11:04.758314  520924 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:11:04.758397  520924 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:11:04.766585  520924 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:11:04.808924  520924 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:11:04.809139  520924 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:11:04.886354  520924 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:11:04.886440  520924 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:11:04.886480  520924 kubeadm.go:319] OS: Linux
	I0111 08:11:04.886547  520924 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:11:04.886600  520924 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:11:04.886651  520924 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:11:04.886703  520924 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:11:04.886755  520924 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:11:04.886805  520924 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:11:04.886891  520924 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:11:04.886943  520924 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:11:04.887000  520924 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:11:04.973460  520924 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:11:04.973579  520924 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:11:04.973675  520924 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:11:04.995257  520924 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:11:04.999399  520924 out.go:252]   - Generating certificates and keys ...
	I0111 08:11:04.999510  520924 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:11:04.999582  520924 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:11:05.160232  520924 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:11:05.498105  520924 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:11:05.828651  520924 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:11:06.119780  520924 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:11:06.615562  520924 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:11:06.615743  520924 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [docker-flags-747538 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:11:06.923732  520924 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:11:06.924188  520924 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [docker-flags-747538 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:11:07.177014  520924 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:11:07.304847  520924 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:11:07.751910  520924 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:11:07.752226  520924 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:11:08.130275  520924 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:11:08.358093  520924 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:11:08.565532  520924 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:11:08.889031  520924 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:11:09.120333  520924 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:11:09.120967  520924 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:11:09.124514  520924 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:11:09.128211  520924 out.go:252]   - Booting up control plane ...
	I0111 08:11:09.128321  520924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:11:09.128399  520924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:11:09.128878  520924 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:11:09.145487  520924 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:11:09.145807  520924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:11:09.154175  520924 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:11:09.154480  520924 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:11:09.154699  520924 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:11:09.288104  520924 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:11:09.288226  520924 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:11:11.288730  520924 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.000960503s
	I0111 08:11:11.292185  520924 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0111 08:11:11.292281  520924 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0111 08:11:11.292370  520924 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0111 08:11:11.292705  520924 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0111 08:11:13.307884  520924 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.015247141s
	I0111 08:11:15.360506  520924 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.068270586s
	I0111 08:11:17.293986  520924 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001534258s
	I0111 08:11:17.339991  520924 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0111 08:11:17.359003  520924 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0111 08:11:17.375748  520924 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0111 08:11:17.375950  520924 kubeadm.go:319] [mark-control-plane] Marking the node docker-flags-747538 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0111 08:11:17.390234  520924 kubeadm.go:319] [bootstrap-token] Using token: bwyv57.9wca10o27ezxy0ff
	I0111 08:11:17.452858  510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0111 08:11:17.452923  510536 kubeadm.go:319] 
	I0111 08:11:17.453044  510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:11:17.455493  510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:11:17.455552  510536 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:11:17.455655  510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:11:17.455726  510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:11:17.455771  510536 kubeadm.go:319] OS: Linux
	I0111 08:11:17.455821  510536 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:11:17.455882  510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:11:17.455934  510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:11:17.455990  510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:11:17.456045  510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:11:17.456098  510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:11:17.456174  510536 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:11:17.456250  510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:11:17.456309  510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:11:17.456404  510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:11:17.456555  510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:11:17.456685  510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:11:17.456751  510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:11:17.461728  510536 out.go:252]   - Generating certificates and keys ...
	I0111 08:11:17.461862  510536 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:11:17.461936  510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:11:17.462020  510536 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:11:17.462086  510536 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:11:17.462160  510536 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:11:17.462218  510536 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:11:17.462283  510536 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:11:17.462345  510536 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:11:17.462426  510536 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:11:17.462501  510536 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:11:17.462539  510536 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:11:17.462595  510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:11:17.462647  510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:11:17.462704  510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:11:17.462757  510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:11:17.462821  510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:11:17.462945  510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:11:17.463059  510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:11:17.463156  510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:11:17.466057  510536 out.go:252]   - Booting up control plane ...
	I0111 08:11:17.466204  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:11:17.466297  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:11:17.466399  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:11:17.466523  510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:11:17.466625  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:11:17.466736  510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:11:17.466873  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:11:17.466916  510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:11:17.467064  510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:11:17.467174  510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:11:17.467272  510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000843963s
	I0111 08:11:17.467285  510536 kubeadm.go:319] 
	I0111 08:11:17.467355  510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:11:17.467421  510536 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:11:17.467571  510536 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:11:17.467585  510536 kubeadm.go:319] 
	I0111 08:11:17.467699  510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:11:17.467740  510536 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:11:17.467774  510536 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:11:17.467797  510536 kubeadm.go:319] 
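
The [kubelet-check] failure above comes from a second minikube process interleaved in this capture (note the PID switch from 520924 to 510536). It is kubeadm polling the kubelet's local health endpoint; the earlier error line quotes the probe verbatim: curl -sSL http://127.0.0.1:10248/healthz, refused for the full 4m0s window. A minimal sketch of that probe loop; the per-request timeout and poll interval here are illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint the way the
// [kubelet-check] phase describes, returning once it answers 200 OK.
func waitKubeletHealthy(deadline time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy after %s", deadline)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err) // mirrors "The kubelet is not healthy after 4m0.000843963s"
	}
}
```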
	I0111 08:11:17.467843  510536 kubeadm.go:403] duration metric: took 8m6.858627939s to StartCluster
	I0111 08:11:17.467883  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:11:17.467954  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:11:17.510393  510536 cri.go:96] found id: ""
	I0111 08:11:17.510436  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.510445  510536 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:11:17.510454  510536 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 08:11:17.510520  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:11:17.540055  510536 cri.go:96] found id: ""
	I0111 08:11:17.540090  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.540099  510536 logs.go:284] No container was found matching "etcd"
	I0111 08:11:17.540106  510536 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 08:11:17.540168  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:11:17.568990  510536 cri.go:96] found id: ""
	I0111 08:11:17.569063  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.569086  510536 logs.go:284] No container was found matching "coredns"
	I0111 08:11:17.569106  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:11:17.569199  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:11:17.598546  510536 cri.go:96] found id: ""
	I0111 08:11:17.598624  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.598647  510536 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:11:17.598667  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:11:17.598751  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:11:17.650676  510536 cri.go:96] found id: ""
	I0111 08:11:17.650750  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.650773  510536 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:11:17.650794  510536 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:11:17.650928  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:11:17.684396  510536 cri.go:96] found id: ""
	I0111 08:11:17.684474  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.684505  510536 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:11:17.684527  510536 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 08:11:17.684636  510536 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:11:17.745829  510536 cri.go:96] found id: ""
	I0111 08:11:17.745873  510536 logs.go:282] 0 containers: []
	W0111 08:11:17.745883  510536 logs.go:284] No container was found matching "kindnet"
	I0111 08:11:17.745892  510536 logs.go:123] Gathering logs for kubelet ...
	I0111 08:11:17.745930  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:11:17.828347  510536 logs.go:123] Gathering logs for dmesg ...
	I0111 08:11:17.828383  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:11:17.850604  510536 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:11:17.850630  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:11:17.973516  510536 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:11:17.963026    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.963926    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.967222    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.967572    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.969066    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:11:17.963026    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.963926    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.967222    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.967572    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:17.969066    5596 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 08:11:17.973541  510536 logs.go:123] Gathering logs for Docker ...
	I0111 08:11:17.973554  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0111 08:11:18.001046  510536 logs.go:123] Gathering logs for container status ...
	I0111 08:11:18.001086  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
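
The four "Gathering logs" steps above are plain shell commands run over SSH; the same evidence can be collected by hand from inside the node (commands copied verbatim from this log):

    sudo journalctl -u kubelet -n 400                 # kubelet unit log
    sudo journalctl -u docker -u cri-docker -n 400    # engine + CRI shim logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a || sudo docker ps -a            # container list, with docker fallback

The describe-nodes step also ran but failed, since nothing is listening on port 8443.
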
	W0111 08:11:18.046288  510536 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000843963s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:11:18.046406  510536 out.go:285] * 
	W0111 08:11:18.046610  510536 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000843963s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:11:18.046721  510536 out.go:285] * 
	W0111 08:11:18.047132  510536 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:11:18.052816  510536 out.go:203] 
	W0111 08:11:18.055641  510536 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000843963s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:11:18.055919  510536 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:11:18.055975  510536 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:11:18.060639  510536 out.go:203] 
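
The suggestion above and the kubelet journal later in this report point at one root cause: kubelet v1.35 refuses to start on a cgroup v1 host unless that is explicitly allowed. Two candidate remedies follow from the messages in this log; both are sketches, not verified against this job:

    # 1. Retry with the cgroup driver the suggestion names:
    minikube start -p force-systemd-flag-176470 --extra-config=kubelet.cgroup-driver=systemd

    # 2. Opt back in to cgroup v1 per the SystemVerification warning. The serialized
    #    field name (failCgroupV1) and the config path are assumptions taken from the
    #    warning text and the kubeadm output above:
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet

The warning also says the validation itself must be explicitly skipped, so option 2 alone may not unblock kubeadm's preflight.
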
	I0111 08:11:17.393179  520924 out.go:252]   - Configuring RBAC rules ...
	I0111 08:11:17.393307  520924 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0111 08:11:17.397444  520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0111 08:11:17.407118  520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0111 08:11:17.412368  520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0111 08:11:17.417190  520924 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0111 08:11:17.423219  520924 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0111 08:11:17.702432  520924 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0111 08:11:18.163794  520924 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0111 08:11:18.702151  520924 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0111 08:11:18.703799  520924 kubeadm.go:319] 
	I0111 08:11:18.703882  520924 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0111 08:11:18.703888  520924 kubeadm.go:319] 
	I0111 08:11:18.703965  520924 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0111 08:11:18.703969  520924 kubeadm.go:319] 
	I0111 08:11:18.703994  520924 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0111 08:11:18.704465  520924 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0111 08:11:18.704530  520924 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0111 08:11:18.704535  520924 kubeadm.go:319] 
	I0111 08:11:18.704589  520924 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0111 08:11:18.704592  520924 kubeadm.go:319] 
	I0111 08:11:18.704640  520924 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0111 08:11:18.704643  520924 kubeadm.go:319] 
	I0111 08:11:18.704701  520924 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0111 08:11:18.704776  520924 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0111 08:11:18.704844  520924 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0111 08:11:18.704850  520924 kubeadm.go:319] 
	I0111 08:11:18.705164  520924 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0111 08:11:18.705247  520924 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0111 08:11:18.705251  520924 kubeadm.go:319] 
	I0111 08:11:18.705546  520924 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bwyv57.9wca10o27ezxy0ff \
	I0111 08:11:18.705654  520924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:818ce15c86fa4793707dcde7e618897f3968773ad82953fed09116f6b0602c24 \
	I0111 08:11:18.705876  520924 kubeadm.go:319] 	--control-plane 
	I0111 08:11:18.705885  520924 kubeadm.go:319] 
	I0111 08:11:18.706157  520924 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0111 08:11:18.706166  520924 kubeadm.go:319] 
	I0111 08:11:18.706472  520924 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bwyv57.9wca10o27ezxy0ff \
	I0111 08:11:18.706758  520924 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:818ce15c86fa4793707dcde7e618897f3968773ad82953fed09116f6b0602c24 
	I0111 08:11:18.713780  520924 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:11:18.714408  520924 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:11:18.714535  520924 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:11:18.714546  520924 cni.go:84] Creating CNI manager for ""
	I0111 08:11:18.714560  520924 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:11:18.718177  520924 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0111 08:11:18.721094  520924 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0111 08:11:18.795763  520924 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
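
The 496 bytes copied here are minikube's bridge CNI config; the file's contents are not shown in this log. A representative conflist of the same shape (illustrative values only, not the exact file minikube writes):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
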
	I0111 08:11:18.840500  520924 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0111 08:11:18.840628  520924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0111 08:11:18.840704  520924 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes docker-flags-747538 minikube.k8s.io/updated_at=2026_01_11T08_11_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=4473aa4ffaa416872fe849e19c0ce3dabca02c04 minikube.k8s.io/name=docker-flags-747538 minikube.k8s.io/primary=true
	I0111 08:11:19.064744  520924 ops.go:34] apiserver oom_adj: -16
	I0111 08:11:19.064755  520924 kubeadm.go:1114] duration metric: took 224.176624ms to wait for elevateKubeSystemPrivileges
	I0111 08:11:19.064780  520924 kubeadm.go:403] duration metric: took 14.402621199s to StartCluster
	I0111 08:11:19.064804  520924 settings.go:142] acquiring lock: {Name:mk2450911e4e3da6233070d23405462f9cda31b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:19.064878  520924 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 08:11:19.065488  520924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/kubeconfig: {Name:mk23bbe94b13868b5365bf437bc6e69ac4646cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:11:19.065714  520924 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0111 08:11:19.065715  520924 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0111 08:11:19.065976  520924 config.go:182] Loaded profile config "docker-flags-747538": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:11:19.068850  520924 out.go:179] * Verifying Kubernetes components...
	I0111 08:11:19.071842  520924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	
	
	==> Docker <==
	Jan 11 08:03:07 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:07.828330129Z" level=info msg="Restoring containers: start."
	Jan 11 08:03:07 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:07.847273714Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Jan 11 08:03:07 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:07.863311887Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.073563281Z" level=info msg="Loading containers: done."
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.084984648Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.085054701Z" level=info msg="Docker daemon" commit=08440b6 containerd-snapshotter=false storage-driver=overlay2 version=29.1.4
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.085108271Z" level=info msg="Initializing buildkit"
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.105098506Z" level=info msg="Completed buildkit initialization"
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.110602123Z" level=info msg="Daemon has completed initialization"
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.110777699Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 11 08:03:08 force-systemd-flag-176470 systemd[1]: Started docker.service - Docker Application Container Engine.
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.112617910Z" level=info msg="API listen on /run/docker.sock"
	Jan 11 08:03:08 force-systemd-flag-176470 dockerd[1142]: time="2026-01-11T08:03:08.112699787Z" level=info msg="API listen on [::]:2376"
	Jan 11 08:03:08 force-systemd-flag-176470 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Start docker client with request timeout 0s"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Loaded network plugin cni"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Setting cgroupDriver systemd"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 11 08:03:08 force-systemd-flag-176470 cri-dockerd[1424]: time="2026-01-11T08:03:08Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 11 08:03:08 force-systemd-flag-176470 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:11:20.076993    5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:20.078022    5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:20.079887    5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:20.080435    5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:11:20.082060    5746 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan11 06:45] overlayfs: idmapped layers are currently not supported
	[Jan11 06:46] overlayfs: idmapped layers are currently not supported
	[Jan11 06:47] overlayfs: idmapped layers are currently not supported
	[Jan11 06:56] overlayfs: idmapped layers are currently not supported
	[  +5.181200] overlayfs: idmapped layers are currently not supported
	[Jan11 07:00] overlayfs: idmapped layers are currently not supported
	[Jan11 07:01] overlayfs: idmapped layers are currently not supported
	[Jan11 07:06] overlayfs: idmapped layers are currently not supported
	[Jan11 07:07] overlayfs: idmapped layers are currently not supported
	[Jan11 07:08] overlayfs: idmapped layers are currently not supported
	[Jan11 07:09] overlayfs: idmapped layers are currently not supported
	[ +36.684603] overlayfs: idmapped layers are currently not supported
	[Jan11 07:10] overlayfs: idmapped layers are currently not supported
	[Jan11 07:11] overlayfs: idmapped layers are currently not supported
	[Jan11 07:12] overlayfs: idmapped layers are currently not supported
	[ +18.034227] overlayfs: idmapped layers are currently not supported
	[Jan11 07:13] overlayfs: idmapped layers are currently not supported
	[Jan11 07:14] overlayfs: idmapped layers are currently not supported
	[Jan11 07:15] overlayfs: idmapped layers are currently not supported
	[ +23.411747] overlayfs: idmapped layers are currently not supported
	[Jan11 07:16] overlayfs: idmapped layers are currently not supported
	[ +26.028245] overlayfs: idmapped layers are currently not supported
	[Jan11 07:17] overlayfs: idmapped layers are currently not supported
	[Jan11 07:18] overlayfs: idmapped layers are currently not supported
	[Jan11 07:23] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 08:11:20 up  2:53,  0 user,  load average: 2.83, 1.43, 1.95
	Linux force-systemd-flag-176470 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 11 08:11:16 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:17 force-systemd-flag-176470 kubelet[5567]: E0111 08:11:17.722676    5567 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:11:17 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:18 force-systemd-flag-176470 kubelet[5615]: E0111 08:11:18.475424    5615 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:11:18 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:19 force-systemd-flag-176470 kubelet[5661]: E0111 08:11:19.235128    5661 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:11:19 force-systemd-flag-176470 kubelet[5728]: E0111 08:11:19.950698    5728 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:11:19 force-systemd-flag-176470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-176470 -n force-systemd-flag-176470
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-176470 -n force-systemd-flag-176470: exit status 6 (368.205038ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:11:20.564716  524577 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-176470" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig

                                                
                                                
** /stderr **
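
The status probe fails for two stacked reasons: the apiserver never started, and the profile was never written to the kubeconfig, hence the "does not appear in ... kubeconfig" error. For the milder case the warning describes, where the profile exists but kubectl points at a stale context, the named fix plus a quick verification would be:

    minikube update-context -p force-systemd-flag-176470
    kubectl config get-contexts   # the profile's context should now be listed and current
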
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-176470" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-176470" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-176470
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-176470: (2.039331365s)
--- FAIL: TestForceSystemdFlag (507.47s)

                                                
                                    
TestForceSystemdEnv (506.9s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m23.257100407s)

                                                
                                                
-- stdout --
	* [force-systemd-env-081796] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-081796" primary control-plane node in "force-systemd-env-081796" cluster
	* Pulling base image v0.0.48-1768032998-22402 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0111 08:02:22.481043  501966 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:02:22.484723  501966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:02:22.484743  501966 out.go:374] Setting ErrFile to fd 2...
	I0111 08:02:22.484750  501966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:02:22.485047  501966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 08:02:22.485576  501966 out.go:368] Setting JSON to false
	I0111 08:02:22.486488  501966 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9892,"bootTime":1768108650,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 08:02:22.486556  501966 start.go:143] virtualization:  
	I0111 08:02:22.489939  501966 out.go:179] * [force-systemd-env-081796] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:02:22.493981  501966 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:02:22.494200  501966 notify.go:221] Checking for updates...
	I0111 08:02:22.501862  501966 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:02:22.504879  501966 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 08:02:22.508623  501966 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 08:02:22.512592  501966 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:02:22.516667  501966 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0111 08:02:22.521881  501966 config.go:182] Loaded profile config "NoKubernetes-616586": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0111 08:02:22.522072  501966 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:02:22.585764  501966 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:02:22.585953  501966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:02:22.742450  501966 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 08:02:22.723671311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:02:22.742566  501966 docker.go:319] overlay module found
	I0111 08:02:22.745770  501966 out.go:179] * Using the docker driver based on user configuration
	I0111 08:02:22.748664  501966 start.go:309] selected driver: docker
	I0111 08:02:22.748688  501966 start.go:928] validating driver "docker" against <nil>
	I0111 08:02:22.748703  501966 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:02:22.749406  501966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:02:22.856854  501966 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 08:02:22.8436946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:02:22.857008  501966 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:02:22.857224  501966 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:02:22.860436  501966 out.go:179] * Using Docker driver with root privileges
	I0111 08:02:22.863575  501966 cni.go:84] Creating CNI manager for ""
	I0111 08:02:22.863663  501966 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:02:22.863684  501966 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0111 08:02:22.863771  501966 start.go:353] cluster config:
	{Name:force-systemd-env-081796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-081796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:02:22.866909  501966 out.go:179] * Starting "force-systemd-env-081796" primary control-plane node in "force-systemd-env-081796" cluster
	I0111 08:02:22.869714  501966 cache.go:134] Beginning downloading kic base image for docker with docker
	I0111 08:02:22.872719  501966 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:02:22.875836  501966 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:22.875889  501966 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0111 08:02:22.875899  501966 cache.go:65] Caching tarball of preloaded images
	I0111 08:02:22.875988  501966 preload.go:251] Found /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:02:22.876003  501966 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0111 08:02:22.876114  501966 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/config.json ...
	I0111 08:02:22.876143  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/config.json: {Name:mkee487da58c8771951faa537c4a0f7f07b89fff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:22.876317  501966 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:02:22.903517  501966 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:02:22.903538  501966 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:02:22.903554  501966 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:02:22.903586  501966 start.go:360] acquireMachinesLock for force-systemd-env-081796: {Name:mk8d8249b83db3da5a89f1b7b7decec3f39e0966 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:02:22.903684  501966 start.go:364] duration metric: took 83.78µs to acquireMachinesLock for "force-systemd-env-081796"
	I0111 08:02:22.903710  501966 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-081796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-081796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0111 08:02:22.903778  501966 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:02:22.907590  501966 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:02:22.907824  501966 start.go:159] libmachine.API.Create for "force-systemd-env-081796" (driver="docker")
	I0111 08:02:22.907853  501966 client.go:173] LocalClient.Create starting
	I0111 08:02:22.907922  501966 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem
	I0111 08:02:22.907962  501966 main.go:144] libmachine: Decoding PEM data...
	I0111 08:02:22.907977  501966 main.go:144] libmachine: Parsing certificate...
	I0111 08:02:22.908040  501966 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem
	I0111 08:02:22.908057  501966 main.go:144] libmachine: Decoding PEM data...
	I0111 08:02:22.908073  501966 main.go:144] libmachine: Parsing certificate...
	I0111 08:02:22.908438  501966 cli_runner.go:164] Run: docker network inspect force-systemd-env-081796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:02:22.939729  501966 cli_runner.go:211] docker network inspect force-systemd-env-081796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:02:22.939811  501966 network_create.go:284] running [docker network inspect force-systemd-env-081796] to gather additional debugging logs...
	I0111 08:02:22.939829  501966 cli_runner.go:164] Run: docker network inspect force-systemd-env-081796
	W0111 08:02:22.962710  501966 cli_runner.go:211] docker network inspect force-systemd-env-081796 returned with exit code 1
	I0111 08:02:22.962739  501966 network_create.go:287] error running [docker network inspect force-systemd-env-081796]: docker network inspect force-systemd-env-081796: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-081796 not found
	I0111 08:02:22.962752  501966 network_create.go:289] output of [docker network inspect force-systemd-env-081796]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-081796 not found
	
	** /stderr **
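
The exit status 1 above is the expected first-run miss: minikube inspects the per-profile network and only creates it when the daemon reports it missing. A minimal Go sketch of that inspect-then-create probe (the network name is illustrative, and the real flow adds subnet/gateway flags as shown a few lines down):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureNetwork creates a docker network only if inspecting it fails
    // with the daemon's "not found" error, mirroring the log above.
    func ensureNetwork(name string) error {
    	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
    	if err == nil {
    		return nil // already present
    	}
    	if !strings.Contains(string(out), "not found") {
    		return fmt.Errorf("inspect %s: %v: %s", name, err, out)
    	}
    	return exec.Command("docker", "network", "create", name).Run()
    }

    func main() {
    	if err := ensureNetwork("force-systemd-env-demo"); err != nil {
    		panic(err)
    	}
    }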
	I0111 08:02:22.962897  501966 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:02:22.994741  501966 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
	I0111 08:02:22.995181  501966 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40d7f82078db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:4c:a4:8c:ba:d2} reservation:<nil>}
	I0111 08:02:22.995418  501966 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-462883b60cc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:e8:a2:f7:f9:41} reservation:<nil>}
	I0111 08:02:22.995729  501966 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e18e058dd41d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:98:90:76:cb:ff} reservation:<nil>}
	I0111 08:02:22.996139  501966 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1cf30}
	I0111 08:02:22.996159  501966 network_create.go:124] attempt to create docker network force-systemd-env-081796 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0111 08:02:22.996231  501966 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-081796 force-systemd-env-081796
	I0111 08:02:23.078335  501966 network_create.go:108] docker network force-systemd-env-081796 192.168.85.0/24 created
	I0111 08:02:23.078381  501966 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-081796" container
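
The scan above skips the /24 blocks held by other profiles (192.168.49, .58, .67, .76) and settles on 192.168.85.0/24, handing the gateway .1 and the node the static .2. A rough sketch of such a scan; the third-octet step of 9 is inferred from the log lines above, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // takenSubnets returns the IPv4 subnets of every existing docker network.
    func takenSubnets() (map[string]bool, error) {
    	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
    	if err != nil {
    		return nil, err
    	}
    	taken := map[string]bool{}
    	for _, id := range strings.Fields(string(ids)) {
    		out, err := exec.Command("docker", "network", "inspect",
    			"-f", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
    		if err != nil {
    			continue
    		}
    		for _, s := range strings.Fields(string(out)) {
    			taken[s] = true
    		}
    	}
    	return taken, nil
    }

    func main() {
    	taken, err := takenSubnets()
    	if err != nil {
    		panic(err)
    	}
    	// Walk 192.168.49.0/24, 192.168.58.0/24, ... as in the log above.
    	for octet := 49; octet < 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[subnet] {
    			fmt.Println("free subnet:", subnet) // gateway .1, first node .2
    			return
    		}
    	}
    }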
	I0111 08:02:23.078453  501966 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:02:23.106480  501966 cli_runner.go:164] Run: docker volume create force-systemd-env-081796 --label name.minikube.sigs.k8s.io=force-systemd-env-081796 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:02:23.137784  501966 oci.go:103] Successfully created a docker volume force-systemd-env-081796
	I0111 08:02:23.137864  501966 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-081796-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-081796 --entrypoint /usr/bin/test -v force-systemd-env-081796:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:02:23.815533  501966 oci.go:107] Successfully prepared a docker volume force-systemd-env-081796
	I0111 08:02:23.815600  501966 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:23.815611  501966 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:02:23.815687  501966 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-081796:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:02:26.256150  501966 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-081796:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (2.440405607s)
	I0111 08:02:26.256183  501966 kic.go:203] duration metric: took 2.440567982s to extract preloaded images to volume ...
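
The two docker runs above are a volume-priming pattern: a throwaway container first verifies the named volume mounts (test -d /var/lib), then a second one untars the preloaded image cache into it, so the node container boots with its image store already populated. The same pattern for an arbitrary tarball and volume (image, volume, and path are placeholders; the image must ship tar and lz4, as the kicbase image does):

    package main

    import "os/exec"

    // primeVolume extracts a local .tar.lz4 into a named docker volume by
    // mounting both into a short-lived container, as in the log above.
    func primeVolume(image, volume, tarball string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
    	).Run()
    }

    func main() {
    	// Placeholder values, not minikube's real image or cache path.
    	if err := primeVolume("debian:bookworm", "demo-volume", "/tmp/preload.tar.lz4"); err != nil {
    		panic(err)
    	}
    }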
	W0111 08:02:26.256316  501966 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:02:26.256442  501966 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:02:26.309808  501966 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-081796 --name force-systemd-env-081796 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-081796 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-081796 --network force-systemd-env-081796 --ip 192.168.85.2 --volume force-systemd-env-081796:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:02:26.611265  501966 cli_runner.go:164] Run: docker container inspect force-systemd-env-081796 --format={{.State.Running}}
	I0111 08:02:26.633398  501966 cli_runner.go:164] Run: docker container inspect force-systemd-env-081796 --format={{.State.Status}}
	I0111 08:02:26.657556  501966 cli_runner.go:164] Run: docker exec force-systemd-env-081796 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:02:26.706102  501966 oci.go:144] the created container "force-systemd-env-081796" has a running status.
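
Before the first exec, minikube polls the container state with docker container inspect (`{{.State.Running}}`, then `{{.State.Status}}`). A minimal wait loop over the same probe:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitRunning polls `docker container inspect` until .State.Running is
    // true or the deadline passes, mirroring the checks in the log above.
    func waitRunning(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect",
    			name, "--format", "{{.State.Running}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "true" {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("container %s not running after %s", name, timeout)
    }

    func main() {
    	if err := waitRunning("force-systemd-env-081796", 30*time.Second); err != nil {
    		panic(err)
    	}
    }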
	I0111 08:02:26.706129  501966 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa...
	I0111 08:02:26.796190  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:02:26.796302  501966 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:02:26.822813  501966 cli_runner.go:164] Run: docker container inspect force-systemd-env-081796 --format={{.State.Status}}
	I0111 08:02:26.849827  501966 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:02:26.849847  501966 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-081796 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:02:26.903379  501966 cli_runner.go:164] Run: docker container inspect force-systemd-env-081796 --format={{.State.Status}}
	I0111 08:02:26.924843  501966 machine.go:94] provisionDockerMachine start ...
	I0111 08:02:26.924943  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:26.949940  501966 main.go:144] libmachine: Using SSH client type: native
	I0111 08:02:26.950257  501966 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I0111 08:02:26.950266  501966 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:02:26.950970  501966 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36262->127.0.0.1:33350: read: connection reset by peer
	I0111 08:02:30.119155  501966 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-081796
	
	I0111 08:02:30.119184  501966 ubuntu.go:182] provisioning hostname "force-systemd-env-081796"
	I0111 08:02:30.119279  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:30.151541  501966 main.go:144] libmachine: Using SSH client type: native
	I0111 08:02:30.151877  501966 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I0111 08:02:30.151894  501966 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-081796 && echo "force-systemd-env-081796" | sudo tee /etc/hostname
	I0111 08:02:30.332472  501966 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-081796
	
	I0111 08:02:30.332548  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:30.360622  501966 main.go:144] libmachine: Using SSH client type: native
	I0111 08:02:30.360937  501966 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I0111 08:02:30.360962  501966 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-081796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-081796/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-081796' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:02:30.515282  501966 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0111 08:02:30.515372  501966 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-276769/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-276769/.minikube}
	I0111 08:02:30.515422  501966 ubuntu.go:190] setting up certificates
	I0111 08:02:30.515445  501966 provision.go:84] configureAuth start
	I0111 08:02:30.515519  501966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-081796
	I0111 08:02:30.538385  501966 provision.go:143] copyHostCerts
	I0111 08:02:30.538435  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:02:30.538469  501966 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem, removing ...
	I0111 08:02:30.538478  501966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:02:30.538582  501966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem (1082 bytes)
	I0111 08:02:30.538679  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:02:30.538696  501966 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem, removing ...
	I0111 08:02:30.538701  501966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:02:30.538733  501966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem (1123 bytes)
	I0111 08:02:30.538787  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:02:30.538802  501966 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem, removing ...
	I0111 08:02:30.538813  501966 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:02:30.538884  501966 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem (1675 bytes)
	I0111 08:02:30.539025  501966 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-081796 san=[127.0.0.1 192.168.85.2 force-systemd-env-081796 localhost minikube]
	I0111 08:02:30.934634  501966 provision.go:177] copyRemoteCerts
	I0111 08:02:30.934800  501966 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:02:30.934880  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:30.955910  501966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa Username:docker}
	I0111 08:02:31.069048  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:02:31.069115  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 08:02:31.093961  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:02:31.094025  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0111 08:02:31.116017  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:02:31.116079  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0111 08:02:31.140302  501966 provision.go:87] duration metric: took 624.819718ms to configureAuth
	I0111 08:02:31.140331  501966 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:02:31.140560  501966 config.go:182] Loaded profile config "force-systemd-env-081796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:02:31.140624  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:31.177012  501966 main.go:144] libmachine: Using SSH client type: native
	I0111 08:02:31.177328  501966 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I0111 08:02:31.177345  501966 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0111 08:02:31.335164  501966 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0111 08:02:31.335183  501966 ubuntu.go:71] root file system type: overlay
	I0111 08:02:31.335295  501966 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0111 08:02:31.335359  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:31.382575  501966 main.go:144] libmachine: Using SSH client type: native
	I0111 08:02:31.383114  501966 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I0111 08:02:31.383199  501966 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0111 08:02:31.554107  501966 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0111 08:02:31.554198  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:31.588168  501966 main.go:144] libmachine: Using SSH client type: native
	I0111 08:02:31.588473  501966 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33350 <nil> <nil>}
	I0111 08:02:31.588490  501966 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0111 08:02:32.695031  501966 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2026-01-08 19:56:21.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-11 08:02:31.550156409 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0111 08:02:32.695065  501966 machine.go:97] duration metric: took 5.770202913s to provisionDockerMachine
	I0111 08:02:32.695078  501966 client.go:176] duration metric: took 9.787219361s to LocalClient.Create
	I0111 08:02:32.695131  501966 start.go:167] duration metric: took 9.787269263s to libmachine.API.Create "force-systemd-env-081796"
	I0111 08:02:32.695147  501966 start.go:293] postStartSetup for "force-systemd-env-081796" (driver="docker")
	I0111 08:02:32.695157  501966 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:02:32.695240  501966 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:02:32.695287  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:32.712133  501966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa Username:docker}
	I0111 08:02:32.829750  501966 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:02:32.833642  501966 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:02:32.833667  501966 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:02:32.833687  501966 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/addons for local assets ...
	I0111 08:02:32.833748  501966 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/files for local assets ...
	I0111 08:02:32.833845  501966 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> 2786382.pem in /etc/ssl/certs
	I0111 08:02:32.833852  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /etc/ssl/certs/2786382.pem
	I0111 08:02:32.833969  501966 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:02:32.844437  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:02:32.873184  501966 start.go:296] duration metric: took 178.007012ms for postStartSetup
	I0111 08:02:32.873790  501966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-081796
	I0111 08:02:32.901521  501966 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/config.json ...
	I0111 08:02:32.901786  501966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:02:32.901833  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:32.931056  501966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa Username:docker}
	I0111 08:02:33.051157  501966 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:02:33.056896  501966 start.go:128] duration metric: took 10.153098874s to createHost
	I0111 08:02:33.056962  501966 start.go:83] releasing machines lock for "force-systemd-env-081796", held for 10.153268691s
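
createHost ran entirely under the machines lock acquired at 08:02:22 with {Delay:500ms Timeout:10m0s}, and the lock is released here after 10.15s. A file-based stand-in for that retry-until-timeout pattern; minikube's actual lock implementation is not shown in the log, so this is only an illustrative sketch with those two parameters:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire takes an exclusive lock by creating lockPath with O_EXCL,
    // retrying every delay until timeout. Hypothetical stand-in for the
    // {Delay:500ms Timeout:10m0s} lock spec in the log.
    func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("lock %s: timed out after %s", lockPath, timeout)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	// ... provision the machine while holding the lock ...
    }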
	I0111 08:02:33.057058  501966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-081796
	I0111 08:02:33.077874  501966 ssh_runner.go:195] Run: cat /version.json
	I0111 08:02:33.077928  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:33.078211  501966 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:02:33.078279  501966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-081796
	I0111 08:02:33.100418  501966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa Username:docker}
	I0111 08:02:33.114934  501966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33350 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-env-081796/id_rsa Username:docker}
	I0111 08:02:33.360703  501966 ssh_runner.go:195] Run: systemctl --version
	I0111 08:02:33.367774  501966 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:02:33.372074  501966 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:02:33.372132  501966 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:02:33.406379  501966 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:02:33.406408  501966 start.go:496] detecting cgroup driver to use...
	I0111 08:02:33.406426  501966 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:02:33.406543  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:02:33.432985  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:02:33.447570  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:02:33.468363  501966 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0111 08:02:33.468450  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0111 08:02:33.484866  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:02:33.502211  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:02:33.516458  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:02:33.528623  501966 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:02:33.539371  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:02:33.554154  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:02:33.563882  501966 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:02:33.574313  501966 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:02:33.583933  501966 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:02:33.592228  501966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:02:33.742472  501966 ssh_runner.go:195] Run: sudo systemctl restart containerd
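
The sed chain above flips containerd to the systemd cgroup driver by rewriting SystemdCgroup in /etc/containerd/config.toml, then reloads and restarts the daemon. The same edit expressed in Go:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setSystemdCgroup rewrites `SystemdCgroup = ...` to `SystemdCgroup = true`,
    // the same edit the sed command in the log performs on config.toml.
    func setSystemdCgroup(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := setSystemdCgroup("/etc/containerd/config.toml"); err != nil {
    		panic(err)
    	}
    	// Follow with: systemctl daemon-reload && systemctl restart containerd
    }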
	I0111 08:02:33.867537  501966 start.go:496] detecting cgroup driver to use...
	I0111 08:02:33.867567  501966 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:02:33.867633  501966 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0111 08:02:33.895342  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:02:33.915901  501966 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:02:33.941204  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:02:33.965505  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:02:33.980114  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:02:33.994926  501966 ssh_runner.go:195] Run: which cri-dockerd
	I0111 08:02:34.003548  501966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0111 08:02:34.015044  501966 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0111 08:02:34.034980  501966 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0111 08:02:34.209112  501966 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0111 08:02:34.370904  501966 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0111 08:02:34.371028  501966 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0111 08:02:34.416988  501966 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0111 08:02:34.429839  501966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:02:34.609845  501966 ssh_runner.go:195] Run: sudo systemctl restart docker
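
docker.go:578 above writes a 129-byte /etc/docker/daemon.json to pin dockerd itself to the systemd cgroup driver, but the payload is not echoed to the log. The body below is therefore an assumption based on docker's standard exec-opts mechanism, not the exact file minikube shipped:

    package main

    import "os"

    // Assumed daemon.json contents: docker's documented way to select the
    // systemd cgroup driver. The real 129-byte payload is not in the log.
    const daemonJSON = `{
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/docker/daemon.json", []byte(daemonJSON), 0o644); err != nil {
    		panic(err)
    	}
    	// Follow with: systemctl daemon-reload && systemctl restart docker
    }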
	I0111 08:02:35.364505  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:02:35.386674  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0111 08:02:35.405380  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:02:35.421485  501966 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0111 08:02:35.609879  501966 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0111 08:02:35.726512  501966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:02:35.852797  501966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0111 08:02:35.868482  501966 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0111 08:02:35.881753  501966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:02:35.998778  501966 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0111 08:02:36.074500  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:02:36.090533  501966 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0111 08:02:36.090608  501966 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0111 08:02:36.094695  501966 start.go:574] Will wait 60s for crictl version
	I0111 08:02:36.094760  501966 ssh_runner.go:195] Run: which crictl
	I0111 08:02:36.098439  501966 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:02:36.123258  501966 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.4
	RuntimeApiVersion:  v1
	I0111 08:02:36.123331  501966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:02:36.145422  501966 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:02:36.174083  501966 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.4 ...
	I0111 08:02:36.174191  501966 cli_runner.go:164] Run: docker network inspect force-systemd-env-081796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:02:36.191028  501966 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0111 08:02:36.195071  501966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:02:36.205138  501966 kubeadm.go:884] updating cluster {Name:force-systemd-env-081796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-081796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:02:36.205260  501966 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:36.205320  501966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:02:36.225513  501966 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:02:36.225535  501966 docker.go:624] Images already preloaded, skipping extraction
	I0111 08:02:36.225600  501966 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:02:36.244786  501966 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:02:36.244814  501966 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:02:36.244825  501966 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0111 08:02:36.244929  501966 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-081796 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-081796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:02:36.244999  501966 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0111 08:02:36.297104  501966 cni.go:84] Creating CNI manager for ""
	I0111 08:02:36.297134  501966 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:02:36.297154  501966 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:02:36.297175  501966 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-081796 NodeName:force-systemd-env-081796 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:02:36.297311  501966 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-081796"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
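
The KubeletConfiguration above pins cgroupDriver: systemd, matching the docker info --format {{.CgroupDriver}} probe at 08:02:36.244; if the two drivers disagree, the kubelet refuses to start. A quick consistency check over the same probe:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe as the log: docker info --format {{.CgroupDriver}}
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	driver := strings.TrimSpace(string(out))
    	if driver != "systemd" {
    		fmt.Printf("mismatch: dockerd uses %q but kubelet is configured for systemd\n", driver)
    		return
    	}
    	fmt.Println("cgroup drivers agree: systemd")
    }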
	
	I0111 08:02:36.297382  501966 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:02:36.305444  501966 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:02:36.305523  501966 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:02:36.313248  501966 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0111 08:02:36.326215  501966 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:02:36.339266  501966 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0111 08:02:36.351817  501966 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:02:36.355783  501966 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:02:36.365894  501966 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:02:36.496521  501966 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:02:36.520775  501966 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796 for IP: 192.168.85.2
	I0111 08:02:36.520800  501966 certs.go:195] generating shared ca certs ...
	I0111 08:02:36.520816  501966 certs.go:227] acquiring lock for ca certs: {Name:mk5238b420a0ee024668d9aed797ac9a441cf30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:36.521013  501966 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key
	I0111 08:02:36.521091  501966 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key
	I0111 08:02:36.521106  501966 certs.go:257] generating profile certs ...
	I0111 08:02:36.521178  501966 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/client.key
	I0111 08:02:36.521226  501966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/client.crt with IP's: []
	I0111 08:02:36.975573  501966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/client.crt ...
	I0111 08:02:36.975656  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/client.crt: {Name:mkb2fe726d6c4fadf68796925702da4da1a36409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:36.975886  501966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/client.key ...
	I0111 08:02:36.975924  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/client.key: {Name:mk198211da4763e9a28b1a1bd871198b5ad3444f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:36.976066  501966 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key.21db5c5f
	I0111 08:02:36.976207  501966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt.21db5c5f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0111 08:02:37.096394  501966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt.21db5c5f ...
	I0111 08:02:37.096469  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt.21db5c5f: {Name:mk3ed3a7cd21b4fa6bdb4f2ac10f9e60dd758d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:37.096691  501966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key.21db5c5f ...
	I0111 08:02:37.096728  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key.21db5c5f: {Name:mk612c77305ad0559d6eae8b3c1576622e779197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:37.096867  501966 certs.go:382] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt.21db5c5f -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt
	I0111 08:02:37.096986  501966 certs.go:386] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key.21db5c5f -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key
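
The apiserver certificate above is issued for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]: the in-cluster service VIP, loopback addresses, and the node IP. A compact standard-library sketch of minting a certificate with IP SANs (self-signed here for brevity, whereas minikube signs apiserver.crt with its minikubeCA key):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs as in the log: service VIP, loopbacks, node IP.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    	}
    	// Self-signed for brevity; minikube signs apiserver.crt with its CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }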
	I0111 08:02:37.097075  501966 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.key
	I0111 08:02:37.097124  501966 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.crt with IP's: []
	I0111 08:02:37.226415  501966 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.crt ...
	I0111 08:02:37.226449  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.crt: {Name:mka883bdbf4ac8375ce0701ee88ab176aa9547af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:37.226654  501966 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.key ...
	I0111 08:02:37.226669  501966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.key: {Name:mk2b8edece41eeca9a14bae44b15145a942c175c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:37.226759  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:02:37.226780  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:02:37.226793  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:02:37.226809  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:02:37.226820  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:02:37.226859  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:02:37.226875  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:02:37.226886  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:02:37.226942  501966 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem (1338 bytes)
	W0111 08:02:37.226984  501966 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638_empty.pem, impossibly tiny 0 bytes
	I0111 08:02:37.226996  501966 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:02:37.227024  501966 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem (1082 bytes)
	I0111 08:02:37.227051  501966 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:02:37.227081  501966 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem (1675 bytes)
	I0111 08:02:37.227126  501966 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:02:37.227159  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /usr/share/ca-certificates/2786382.pem
	I0111 08:02:37.227181  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:02:37.227196  501966 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem -> /usr/share/ca-certificates/278638.pem
	I0111 08:02:37.227811  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:02:37.247362  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:02:37.266237  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:02:37.285168  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:02:37.302822  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:02:37.320907  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:02:37.338292  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:02:37.357023  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-env-081796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:02:37.378315  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /usr/share/ca-certificates/2786382.pem (1708 bytes)
	I0111 08:02:37.402502  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:02:37.422529  501966 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem --> /usr/share/ca-certificates/278638.pem (1338 bytes)
	I0111 08:02:37.442231  501966 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:02:37.457496  501966 ssh_runner.go:195] Run: openssl version
	I0111 08:02:37.464387  501966 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2786382.pem
	I0111 08:02:37.473062  501966 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2786382.pem /etc/ssl/certs/2786382.pem
	I0111 08:02:37.481621  501966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2786382.pem
	I0111 08:02:37.485774  501966 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:30 /usr/share/ca-certificates/2786382.pem
	I0111 08:02:37.485846  501966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2786382.pem
	I0111 08:02:37.527289  501966 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:02:37.534796  501966 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2786382.pem /etc/ssl/certs/3ec20f2e.0
	I0111 08:02:37.542042  501966 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:02:37.549306  501966 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:02:37.556733  501966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:02:37.560583  501966 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:24 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:02:37.560646  501966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:02:37.602423  501966 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:02:37.610722  501966 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:02:37.618126  501966 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/278638.pem
	I0111 08:02:37.625637  501966 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/278638.pem /etc/ssl/certs/278638.pem
	I0111 08:02:37.632816  501966 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278638.pem
	I0111 08:02:37.636461  501966 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:30 /usr/share/ca-certificates/278638.pem
	I0111 08:02:37.636530  501966 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278638.pem
	I0111 08:02:37.682370  501966 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:02:37.689971  501966 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/278638.pem /etc/ssl/certs/51391683.0
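	(For context: the test/ln/openssl sequence from 08:02:37.457496 through 08:02:37.689971 installs each PEM into OpenSSL's CApath layout: "openssl x509 -hash -noout" prints the subject hash, and the cert is symlinked as /etc/ssl/certs/<hash>.0 — e.g. 3ec20f2e.0 above — so library lookups can find it. A hedged Go sketch of that step, assuming openssl on PATH as in the node image; the function name and paths are illustrative:)

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCAHashLink mirrors the hash-and-symlink step in the log.
	func installCAHashLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCAHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}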
	I0111 08:02:37.697309  501966 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:02:37.700881  501966 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:02:37.700965  501966 kubeadm.go:401] StartCluster: {Name:force-systemd-env-081796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-081796 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:02:37.701117  501966 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0111 08:02:37.717639  501966 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:02:37.725968  501966 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:02:37.734467  501966 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:02:37.734565  501966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:02:37.742753  501966 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:02:37.742774  501966 kubeadm.go:158] found existing configuration files:
	
	I0111 08:02:37.742848  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:02:37.750975  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:02:37.751044  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:02:37.759431  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:02:37.767241  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:02:37.767360  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:02:37.774870  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:02:37.782990  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:02:37.783117  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:02:37.790619  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:02:37.798505  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:02:37.798573  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
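	(For context: the grep/rm pairs above are the stale-config cleanup — any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A rough Go equivalent, illustrative rather than minikube's actual code:)

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, name := range []string{"admin.conf", "kubelet.conf",
			"controller-manager.conf", "scheduler.conf"} {
			p := filepath.Join("/etc/kubernetes", name)
			data, err := os.ReadFile(p)
			// Missing file or wrong endpoint: remove, matching "sudo rm -f" above.
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(p)
			}
		}
	}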
	I0111 08:02:37.806232  501966 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:02:37.844598  501966 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:02:37.844863  501966 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:02:37.935323  501966 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:02:37.935474  501966 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:02:37.935552  501966 kubeadm.go:319] OS: Linux
	I0111 08:02:37.935637  501966 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:02:37.935719  501966 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:02:37.935803  501966 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:02:37.935882  501966 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:02:37.935962  501966 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:02:37.936043  501966 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:02:37.936122  501966 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:02:37.936202  501966 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:02:37.936280  501966 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:02:38.000816  501966 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:02:38.000996  501966 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:02:38.001131  501966 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:02:38.021786  501966 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:02:38.027783  501966 out.go:252]   - Generating certificates and keys ...
	I0111 08:02:38.027975  501966 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:02:38.028096  501966 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:02:38.201414  501966 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:02:38.863889  501966 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:02:39.149542  501966 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:02:39.268633  501966 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:02:39.485709  501966 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:02:39.486134  501966 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-081796 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:02:39.590515  501966 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:02:39.591119  501966 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-081796 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0111 08:02:39.768982  501966 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:02:40.087283  501966 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:02:40.891191  501966 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:02:40.891266  501966 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:02:41.095744  501966 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:02:41.341108  501966 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:02:41.433066  501966 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:02:41.651300  501966 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:02:42.199514  501966 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:02:42.200498  501966 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:02:42.203812  501966 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:02:42.207803  501966 out.go:252]   - Booting up control plane ...
	I0111 08:02:42.207911  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:02:42.207991  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:02:42.209660  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:02:42.239321  501966 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:02:42.239438  501966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:02:42.250893  501966 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:02:42.250998  501966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:02:42.251079  501966 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:02:42.437413  501966 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:02:42.437541  501966 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:06:42.438404  501966 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00132891s
	I0111 08:06:42.438431  501966 kubeadm.go:319] 
	I0111 08:06:42.438488  501966 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:06:42.438521  501966 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:06:42.438626  501966 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:06:42.438631  501966 kubeadm.go:319] 
	I0111 08:06:42.438736  501966 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:06:42.438768  501966 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:06:42.438798  501966 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:06:42.438802  501966 kubeadm.go:319] 
	I0111 08:06:42.449958  501966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:06:42.450529  501966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:06:42.450677  501966 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:06:42.450986  501966 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:06:42.450998  501966 kubeadm.go:319] 
	I0111 08:06:42.451068  501966 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 08:06:42.451206  501966 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-081796 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-081796 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132891s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
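	(For context: the four-minute wait that fails above is kubeadm's kubelet-check, which polls the kubelet's healthz endpoint on 127.0.0.1:10248 until it answers or the deadline expires. A minimal sketch of that kind of wait loop, assuming a simple one-second poll — kubeadm's real implementation differs in detail:)

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("kubelet healthy")
				return
			}
			if err == nil {
				resp.Body.Close()
			}
			select {
			case <-ctx.Done():
				// Surfaces as "context deadline exceeded" in the log above.
				fmt.Println("kubelet not healthy:", ctx.Err())
				return
			case <-time.After(time.Second):
			}
		}
	}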
	
	I0111 08:06:42.451287  501966 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0111 08:06:42.871378  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:06:42.885470  501966 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:06:42.885556  501966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:06:42.893513  501966 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:06:42.893534  501966 kubeadm.go:158] found existing configuration files:
	
	I0111 08:06:42.893585  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:06:42.901632  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:06:42.901711  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:06:42.909718  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:06:42.917711  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:06:42.917776  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:06:42.925587  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:06:42.933893  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:06:42.933959  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:06:42.941734  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:06:42.949802  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:06:42.949884  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:06:42.957799  501966 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:06:42.997422  501966 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:06:42.997491  501966 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:06:43.079330  501966 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:06:43.079407  501966 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:06:43.079467  501966 kubeadm.go:319] OS: Linux
	I0111 08:06:43.079536  501966 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:06:43.079593  501966 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:06:43.079643  501966 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:06:43.079699  501966 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:06:43.079755  501966 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:06:43.079805  501966 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:06:43.079857  501966 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:06:43.079914  501966 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:06:43.079968  501966 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:06:43.154582  501966 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:06:43.154784  501966 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:06:43.154940  501966 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:06:43.168319  501966 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:06:43.172046  501966 out.go:252]   - Generating certificates and keys ...
	I0111 08:06:43.172140  501966 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:06:43.172224  501966 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:06:43.172318  501966 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:06:43.172392  501966 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:06:43.172475  501966 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:06:43.172541  501966 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:06:43.172617  501966 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:06:43.172695  501966 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:06:43.172891  501966 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:06:43.172967  501966 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:06:43.173195  501966 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:06:43.173342  501966 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:06:43.558618  501966 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:06:43.710529  501966 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:06:44.025926  501966 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:06:44.234089  501966 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:06:44.851983  501966 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:06:44.852625  501966 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:06:44.855136  501966 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:06:44.858407  501966 out.go:252]   - Booting up control plane ...
	I0111 08:06:44.858523  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:06:44.858601  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:06:44.858683  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:06:44.880530  501966 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:06:44.880657  501966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:06:44.887913  501966 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:06:44.888266  501966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:06:44.888314  501966 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:06:45.038206  501966 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:06:45.047794  501966 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:10:45.038296  501966 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000312126s
	I0111 08:10:45.038337  501966 kubeadm.go:319] 
	I0111 08:10:45.038589  501966 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:10:45.038651  501966 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:10:45.038884  501966 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:10:45.038894  501966 kubeadm.go:319] 
	I0111 08:10:45.039303  501966 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:10:45.039365  501966 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:10:45.039421  501966 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:10:45.039429  501966 kubeadm.go:319] 
	I0111 08:10:45.045107  501966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:10:45.045693  501966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:10:45.045856  501966 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:10:45.046152  501966 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:10:45.046171  501966 kubeadm.go:319] 
	I0111 08:10:45.046242  501966 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:10:45.046320  501966 kubeadm.go:403] duration metric: took 8m7.345360984s to StartCluster
	I0111 08:10:45.046394  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:10:45.046477  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:10:45.120565  501966 cri.go:96] found id: ""
	I0111 08:10:45.120636  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.120646  501966 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:10:45.120659  501966 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 08:10:45.120734  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:10:45.173010  501966 cri.go:96] found id: ""
	I0111 08:10:45.173034  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.173044  501966 logs.go:284] No container was found matching "etcd"
	I0111 08:10:45.173052  501966 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 08:10:45.173164  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:10:45.217457  501966 cri.go:96] found id: ""
	I0111 08:10:45.217482  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.217493  501966 logs.go:284] No container was found matching "coredns"
	I0111 08:10:45.217501  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:10:45.217571  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:10:45.277867  501966 cri.go:96] found id: ""
	I0111 08:10:45.277892  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.277901  501966 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:10:45.277909  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:10:45.277979  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:10:45.312543  501966 cri.go:96] found id: ""
	I0111 08:10:45.312567  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.312577  501966 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:10:45.312584  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:10:45.312651  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:10:45.339479  501966 cri.go:96] found id: ""
	I0111 08:10:45.339505  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.339514  501966 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:10:45.339522  501966 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 08:10:45.339640  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:10:45.386365  501966 cri.go:96] found id: ""
	I0111 08:10:45.386388  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.386398  501966 logs.go:284] No container was found matching "kindnet"
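	(For context: after the retry also fails, minikube switches to diagnostics and lists CRI containers per control-plane component; every query above returns zero IDs because the kubelet never started any pods. Each step reduces to an exec call like this sketch, with the crictl flags copied verbatim from the log:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			out, _ := exec.Command("sudo", "crictl", "--timeout=10s",
				"ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%s: %d containers\n", name, len(ids)) // all 0 in this run
		}
	}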
	I0111 08:10:45.386408  501966 logs.go:123] Gathering logs for container status ...
	I0111 08:10:45.386420  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0111 08:10:45.456280  501966 logs.go:123] Gathering logs for kubelet ...
	I0111 08:10:45.456306  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:10:45.524503  501966 logs.go:123] Gathering logs for dmesg ...
	I0111 08:10:45.524543  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:10:45.542963  501966 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:10:45.542995  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:10:45.609485  501966 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:10:45.600807    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.601229    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603149    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603617    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.605081    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:10:45.600807    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.601229    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603149    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603617    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.605081    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
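	(For context: the describe-nodes failure above is a downstream symptom rather than a separate bug — with no kube-apiserver container running, nothing listens on port 8443, so kubectl's dial is refused. A two-line probe that reproduces the same failure mode; this is a hypothetical check, not part of minikube:)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // "connection refused", as in the log
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}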
	I0111 08:10:45.609509  501966 logs.go:123] Gathering logs for Docker ...
	I0111 08:10:45.609521  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0111 08:10:45.632350  501966 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000312126s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:10:45.632420  501966 out.go:285] * 
	W0111 08:10:45.632482  501966 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000312126s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:10:45.632497  501966 out.go:285] * 
	W0111 08:10:45.632746  501966 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:10:45.637569  501966 out.go:203] 
	W0111 08:10:45.641373  501966 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000312126s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:10:45.641421  501966 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:10:45.641440  501966 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:10:45.645193  501966 out.go:203] 

                                                
                                                
** /stderr **
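
The kubeadm output above names the two checks to run first: the kubelet health endpoint it polled for 4m0s and the systemd unit state. A minimal triage sketch from the host, assuming the profile from this run (force-systemd-env-081796) is still running and reachable via `minikube ssh`:

	# probe the health endpoint kubeadm was polling
	minikube -p force-systemd-env-081796 ssh -- curl -sS http://127.0.0.1:10248/healthz
	# inspect the unit state and the most recent kubelet log lines inside the node
	minikube -p force-systemd-env-081796 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p force-systemd-env-081796 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
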
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-081796 ssh "docker info --format {{.CgroupDriver}}"
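
The host daemon reports `CgroupDriver:cgroupfs` (see the docker info lines in the Last Start log below), while both failing tests force the systemd driver, so a cgroup-driver mismatch is the leading suspect. A hedged check-and-retry sketch; the extra-config flag is the one the Suggestion line in the log above names:

	# confirm which cgroup driver Docker inside the node actually uses
	out/minikube-linux-arm64 -p force-systemd-env-081796 ssh "docker info --format {{.CgroupDriver}}"
	# retry with the kubelet's cgroup driver pinned to systemd, per the suggestion in the log
	out/minikube-linux-arm64 start -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 \
	  --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd
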
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-11 08:10:46.054151916 +0000 UTC m=+2825.170531725
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-081796
helpers_test.go:244: (dbg) docker inspect force-systemd-env-081796:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50",
	        "Created": "2026-01-11T08:02:26.325130582Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-11T08:02:26.38792925Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c30b0ef598bea80c56dc4b61cd46a579326b46036ca8ef885614e2a49a37d006",
	        "ResolvConfPath": "/var/lib/docker/containers/02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50/hostname",
	        "HostsPath": "/var/lib/docker/containers/02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50/hosts",
	        "LogPath": "/var/lib/docker/containers/02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50/02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50-json.log",
	        "Name": "/force-systemd-env-081796",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-081796:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-081796",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02721c3efb6fdc56611575d6cdebcedc4ee099897f3ca27b089e79455c493b50",
	                "LowerDir": "/var/lib/docker/overlay2/55c8e70566c448b276187e99133efd820774e102570e2015e81c0954fce190c5-init/diff:/var/lib/docker/overlay2/e4b3b3f7b2adc33a7ca49c4e0ccdd05f06b3e555556bac3db149fafb744bb371/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55c8e70566c448b276187e99133efd820774e102570e2015e81c0954fce190c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55c8e70566c448b276187e99133efd820774e102570e2015e81c0954fce190c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55c8e70566c448b276187e99133efd820774e102570e2015e81c0954fce190c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-081796",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-081796/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-081796",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-081796",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-081796",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f5c9f4a69b7b36576be45189960e555978c3f26b5870086458862a0c949ed471",
	            "SandboxKey": "/var/run/docker/netns/f5c9f4a69b7b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33350"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33352"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-081796": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:56:46:d2:c4:87",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "439631dccde41deb02b86e234a17ba14d2611d3d5a336743b54d5c406bb867aa",
	                    "EndpointID": "a5e247f1e1cc52883ea37bfe24675848746beec8f098c0ff45e7b244dfa859d6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-081796",
	                        "02721c3efb6f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
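
Rather than reading the full JSON above, a Go template can pull out just the fields relevant to this failure; a short sketch assuming the docker CLI on the host:

	# container state and restart count for the node container
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' force-systemd-env-081796
	# host port published for the node's SSH endpoint (22/tcp -> 127.0.0.1:33350 above)
	docker port force-systemd-env-081796 22
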
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-081796 -n force-systemd-env-081796
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-081796 -n force-systemd-env-081796: exit status 6 (369.923771ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:10:46.425213  520348 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-081796" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig

                                                
                                                
** /stderr **
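
The status probe exits 6 only because the profile's endpoint is missing from the kubeconfig, which is what the stderr line above reports. The warning in stdout names the fix; a sketch assuming kubectl is on the PATH (note the cluster itself still never bootstrapped, so only the context entry is repaired):

	# rewrite the kubeconfig entry for this profile, as the warning suggests
	out/minikube-linux-arm64 -p force-systemd-env-081796 update-context
	# confirm the context now resolves
	kubectl config get-contexts force-systemd-env-081796
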
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-081796 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-195160 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cri-dockerd --version                                                                                   │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl cat containerd --no-pager                                                                     │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo cat /etc/containerd/config.toml                                                                         │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo containerd config dump                                                                                  │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo systemctl cat crio --no-pager                                                                           │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ -p cilium-195160 sudo crio config                                                                                             │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ delete  │ -p cilium-195160                                                                                                              │ cilium-195160             │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p force-systemd-env-081796 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-081796  │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ delete  │ -p NoKubernetes-616586                                                                                                        │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker       │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ ssh     │ -p NoKubernetes-616586 sudo systemctl is-active --quiet service kubelet                                                       │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ stop    │ -p NoKubernetes-616586                                                                                                        │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p NoKubernetes-616586 --driver=docker  --container-runtime=docker                                                            │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ ssh     │ -p NoKubernetes-616586 sudo systemctl is-active --quiet service kubelet                                                       │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ delete  │ -p NoKubernetes-616586                                                                                                        │ NoKubernetes-616586       │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │ 11 Jan 26 08:02 UTC │
	│ start   │ -p force-systemd-flag-176470 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-176470 │ jenkins │ v1.37.0 │ 11 Jan 26 08:02 UTC │                     │
	│ ssh     │ force-systemd-env-081796 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-081796  │ jenkins │ v1.37.0 │ 11 Jan 26 08:10 UTC │ 11 Jan 26 08:10 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 08:02:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 08:02:55.219760  510536 out.go:360] Setting OutFile to fd 1 ...
	I0111 08:02:55.219965  510536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:02:55.219993  510536 out.go:374] Setting ErrFile to fd 2...
	I0111 08:02:55.220012  510536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 08:02:55.220685  510536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 08:02:55.221284  510536 out.go:368] Setting JSON to false
	I0111 08:02:55.222163  510536 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9925,"bootTime":1768108650,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 08:02:55.222344  510536 start.go:143] virtualization:  
	I0111 08:02:55.225197  510536 out.go:179] * [force-systemd-flag-176470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 08:02:55.227752  510536 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 08:02:55.227908  510536 notify.go:221] Checking for updates...
	I0111 08:02:55.233637  510536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 08:02:55.236621  510536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 08:02:55.239599  510536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 08:02:55.242477  510536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 08:02:55.245433  510536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 08:02:55.248887  510536 config.go:182] Loaded profile config "force-systemd-env-081796": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:02:55.249012  510536 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 08:02:55.278955  510536 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 08:02:55.279151  510536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:02:55.340253  510536 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:02:55.330464883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:02:55.340364  510536 docker.go:319] overlay module found
	I0111 08:02:55.343549  510536 out.go:179] * Using the docker driver based on user configuration
	I0111 08:02:55.346477  510536 start.go:309] selected driver: docker
	I0111 08:02:55.346500  510536 start.go:928] validating driver "docker" against <nil>
	I0111 08:02:55.346516  510536 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 08:02:55.347367  510536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 08:02:55.398049  510536 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-11 08:02:55.38897404 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 08:02:55.398208  510536 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 08:02:55.398435  510536 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 08:02:55.401352  510536 out.go:179] * Using Docker driver with root privileges
	I0111 08:02:55.404169  510536 cni.go:84] Creating CNI manager for ""
	I0111 08:02:55.404240  510536 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:02:55.404253  510536 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0111 08:02:55.404339  510536 start.go:353] cluster config:
	{Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:02:55.407425  510536 out.go:179] * Starting "force-systemd-flag-176470" primary control-plane node in "force-systemd-flag-176470" cluster
	I0111 08:02:55.410185  510536 cache.go:134] Beginning downloading kic base image for docker with docker
	I0111 08:02:55.413170  510536 out.go:179] * Pulling base image v0.0.48-1768032998-22402 ...
	I0111 08:02:55.415999  510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:55.416053  510536 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0111 08:02:55.416067  510536 cache.go:65] Caching tarball of preloaded images
	I0111 08:02:55.416071  510536 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 08:02:55.416162  510536 preload.go:251] Found /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0111 08:02:55.416173  510536 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0111 08:02:55.416278  510536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json ...
	I0111 08:02:55.416296  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json: {Name:mkca1c7e6f1f75138479137408eba180dfbb6698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:02:55.436232  510536 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon, skipping pull
	I0111 08:02:55.436255  510536 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 exists in daemon, skipping load
	I0111 08:02:55.436276  510536 cache.go:243] Successfully downloaded all kic artifacts
	I0111 08:02:55.436313  510536 start.go:360] acquireMachinesLock for force-systemd-flag-176470: {Name:mk069654716209309832bc30167c071b9142dd8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0111 08:02:55.436420  510536 start.go:364] duration metric: took 86.972µs to acquireMachinesLock for "force-systemd-flag-176470"
	I0111 08:02:55.436450  510536 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0111 08:02:55.436517  510536 start.go:125] createHost starting for "" (driver="docker")
	I0111 08:02:55.440079  510536 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0111 08:02:55.440330  510536 start.go:159] libmachine.API.Create for "force-systemd-flag-176470" (driver="docker")
	I0111 08:02:55.440371  510536 client.go:173] LocalClient.Create starting
	I0111 08:02:55.440473  510536 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem
	I0111 08:02:55.440510  510536 main.go:144] libmachine: Decoding PEM data...
	I0111 08:02:55.440529  510536 main.go:144] libmachine: Parsing certificate...
	I0111 08:02:55.440585  510536 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem
	I0111 08:02:55.440606  510536 main.go:144] libmachine: Decoding PEM data...
	I0111 08:02:55.440635  510536 main.go:144] libmachine: Parsing certificate...
	I0111 08:02:55.441019  510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0111 08:02:55.456590  510536 cli_runner.go:211] docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0111 08:02:55.456686  510536 network_create.go:284] running [docker network inspect force-systemd-flag-176470] to gather additional debugging logs...
	I0111 08:02:55.456707  510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470
	W0111 08:02:55.472891  510536 cli_runner.go:211] docker network inspect force-systemd-flag-176470 returned with exit code 1
	I0111 08:02:55.472925  510536 network_create.go:287] error running [docker network inspect force-systemd-flag-176470]: docker network inspect force-systemd-flag-176470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-176470 not found
	I0111 08:02:55.472944  510536 network_create.go:289] output of [docker network inspect force-systemd-flag-176470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-176470 not found
	
	** /stderr **
	I0111 08:02:55.473054  510536 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:02:55.489682  510536 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
	I0111 08:02:55.490078  510536 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-40d7f82078db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:a2:4c:a4:8c:ba:d2} reservation:<nil>}
	I0111 08:02:55.490313  510536 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-462883b60cc2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:6e:e8:a2:f7:f9:41} reservation:<nil>}
	I0111 08:02:55.490763  510536 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a16310}
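network.go probes the existing bridge networks and picks the first free private /24: 192.168.49.0, .58.0, and .67.0 are taken above, so .76.0 wins. A minimal Go sketch of that selection pattern; the stride of 9 in the third octet is inferred from the subnets probed in this log, and the taken set is hard-coded here for illustration:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24s starting at 192.168.49.0, stepping
// the third octet by 9 (49, 58, 67, 76, ...), and returns the first subnet
// not already claimed by a bridge network.
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for octet := 49; octet < 256; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	// Hard-coded from the "skipping subnet ... that is taken" lines above;
	// the real code derives this from `docker network inspect` output.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	subnet, _ := firstFreeSubnet(taken)
	fmt.Println(subnet) // 192.168.76.0/24, matching the log
}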
	I0111 08:02:55.490793  510536 network_create.go:124] attempt to create docker network force-systemd-flag-176470 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0111 08:02:55.490879  510536 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-176470 force-systemd-flag-176470
	I0111 08:02:55.555925  510536 network_create.go:108] docker network force-systemd-flag-176470 192.168.76.0/24 created
	I0111 08:02:55.555959  510536 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-176470" container
	I0111 08:02:55.556048  510536 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0111 08:02:55.573066  510536 cli_runner.go:164] Run: docker volume create force-systemd-flag-176470 --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --label created_by.minikube.sigs.k8s.io=true
	I0111 08:02:55.592089  510536 oci.go:103] Successfully created a docker volume force-systemd-flag-176470
	I0111 08:02:55.592203  510536 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-176470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --entrypoint /usr/bin/test -v force-systemd-flag-176470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -d /var/lib
	I0111 08:02:56.131269  510536 oci.go:107] Successfully prepared a docker volume force-systemd-flag-176470
	I0111 08:02:56.131324  510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:02:56.131342  510536 kic.go:194] Starting extracting preloaded images to volume ...
	I0111 08:02:56.131410  510536 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-176470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir
	I0111 08:02:59.422056  510536 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-176470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 -I lz4 -xf /preloaded.tar -C /extractDir: (3.290602576s)
	I0111 08:02:59.422088  510536 kic.go:203] duration metric: took 3.290742215s to extract preloaded images to volume ...
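Two docker runs do the volume preparation seen here. The first mounts the freshly created named volume at /var so Docker's copy-on-first-mount seeds it from the kicbase image; the /usr/bin/test -d /var/lib entrypoint is just a cheap command for the container to run. The second untars the lz4-compressed preload into the same volume. A sketch of the pair via os/exec; the tarball path below is abbreviated:

package main

import (
	"log"
	"os/exec"
)

func docker(args ...string) {
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		log.Fatalf("docker %v: %v\n%s", args, err, out)
	}
}

func main() {
	const vol = "force-systemd-flag-176470"
	const img = "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402"
	const tarball = "/path/to/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4" // abbreviated

	// Mounting the empty named volume at /var makes Docker populate it from
	// the image's /var; `test -d /var/lib` exits once that copy is done.
	docker("run", "--rm", "--entrypoint", "/usr/bin/test",
		"-v", vol+":/var", img, "-d", "/var/lib")

	// Extract the preloaded images into the same volume.
	docker("run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", vol+":/extractDir", img,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}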
	W0111 08:02:59.422241  510536 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0111 08:02:59.422362  510536 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0111 08:02:59.471640  510536 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-176470 --name force-systemd-flag-176470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-176470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-176470 --network force-systemd-flag-176470 --ip 192.168.76.2 --volume force-systemd-flag-176470:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615
	I0111 08:02:59.799854  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Running}}
	I0111 08:02:59.823185  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
	I0111 08:02:59.842792  510536 cli_runner.go:164] Run: docker exec force-systemd-flag-176470 stat /var/lib/dpkg/alternatives/iptables
	I0111 08:02:59.903035  510536 oci.go:144] the created container "force-systemd-flag-176470" has a running status.
	I0111 08:02:59.903064  510536 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa...
	I0111 08:03:00.642486  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0111 08:03:00.642605  510536 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0111 08:03:00.666293  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
	I0111 08:03:00.685140  510536 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0111 08:03:00.685163  510536 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-176470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0111 08:03:00.728771  510536 cli_runner.go:164] Run: docker container inspect force-systemd-flag-176470 --format={{.State.Status}}
	I0111 08:03:00.747446  510536 machine.go:94] provisionDockerMachine start ...
	I0111 08:03:00.747552  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:00.765376  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:00.765734  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:00.765754  510536 main.go:144] libmachine: About to run SSH command:
	hostname
	I0111 08:03:00.766557  510536 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0111 08:03:03.914487  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-176470
	
	I0111 08:03:03.914512  510536 ubuntu.go:182] provisioning hostname "force-systemd-flag-176470"
	I0111 08:03:03.914586  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:03.932237  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:03.932556  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:03.932573  510536 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-176470 && echo "force-systemd-flag-176470" | sudo tee /etc/hostname
	I0111 08:03:04.105837  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-176470
	
	I0111 08:03:04.105961  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:04.127215  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:04.127623  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:04.127644  510536 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-176470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-176470/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-176470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0111 08:03:04.279132  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: 
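The SSH script above ensures /etc/hosts maps 127.0.1.1 to the new hostname: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended, and nothing happens if the hostname is already present (which is why the command output is empty here). The same edit as a small Go function; a sketch, not minikube code:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the grep/sed/tee logic in the SSH command above.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already mapped; nothing to do
	}
	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if line127.MatchString(hosts) {
		return line127.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "force-systemd-flag-176470"))
}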
	I0111 08:03:04.279202  510536 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22402-276769/.minikube CaCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22402-276769/.minikube}
	I0111 08:03:04.279237  510536 ubuntu.go:190] setting up certificates
	I0111 08:03:04.279260  510536 provision.go:84] configureAuth start
	I0111 08:03:04.279342  510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
	I0111 08:03:04.297243  510536 provision.go:143] copyHostCerts
	I0111 08:03:04.297285  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:03:04.297322  510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem, removing ...
	I0111 08:03:04.297328  510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem
	I0111 08:03:04.297407  510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/ca.pem (1082 bytes)
	I0111 08:03:04.297482  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:03:04.297498  510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem, removing ...
	I0111 08:03:04.297502  510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem
	I0111 08:03:04.297526  510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/cert.pem (1123 bytes)
	I0111 08:03:04.297563  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:03:04.297578  510536 exec_runner.go:144] found /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem, removing ...
	I0111 08:03:04.297583  510536 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem
	I0111 08:03:04.297605  510536 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22402-276769/.minikube/key.pem (1675 bytes)
	I0111 08:03:04.297646  510536 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-176470 san=[127.0.0.1 192.168.76.2 force-systemd-flag-176470 localhost minikube]
	I0111 08:03:04.676341  510536 provision.go:177] copyRemoteCerts
	I0111 08:03:04.676407  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0111 08:03:04.676452  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:04.695533  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:04.802703  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0111 08:03:04.802763  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0111 08:03:04.821902  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0111 08:03:04.821976  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0111 08:03:04.840427  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0111 08:03:04.840528  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0111 08:03:04.858272  510536 provision.go:87] duration metric: took 578.972579ms to configureAuth
	I0111 08:03:04.858355  510536 ubuntu.go:206] setting minikube options for container-runtime
	I0111 08:03:04.858554  510536 config.go:182] Loaded profile config "force-systemd-flag-176470": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 08:03:04.858617  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:04.880754  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:04.881061  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:04.881071  510536 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0111 08:03:05.036241  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0111 08:03:05.036263  510536 ubuntu.go:71] root file system type: overlay
	I0111 08:03:05.036379  510536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0111 08:03:05.036456  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:05.055990  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:05.056308  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:05.056396  510536 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# TasksMax is supported only by systemd 226 and above; comment it out on
	# older systemd versions.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0111 08:03:05.217159  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# TasksMax is supported only by systemd 226 and above; comment it out on
	# older systemd versions.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0111 08:03:05.217244  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:05.236377  510536 main.go:144] libmachine: Using SSH client type: native
	I0111 08:03:05.236706  510536 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33365 <nil> <nil>}
	I0111 08:03:05.236730  510536 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0111 08:03:06.213777  510536 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2026-01-08 19:56:21.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-11 08:03:05.213214607 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# TasksMax is supported only by systemd 226 and above; comment it out on
	+# older systemd versions.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0111 08:03:06.213812  510536 machine.go:97] duration metric: took 5.46634075s to provisionDockerMachine
	I0111 08:03:06.213825  510536 client.go:176] duration metric: took 10.773442328s to LocalClient.Create
	I0111 08:03:06.213873  510536 start.go:167] duration metric: took 10.773542862s to libmachine.API.Create "force-systemd-flag-176470"
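The diff-or-replace one-liner at 08:03:05 is an update-if-changed guard: docker is only reloaded, enabled, and restarted when the rendered unit differs from what is already on disk (here it did, hence the diff output). A sketch of the same pattern in Go, with the unit body left as a placeholder:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	rendered := []byte("...unit rendered elsewhere...") // placeholder

	current, _ := os.ReadFile(unit) // a missing file reads as empty
	if bytes.Equal(current, rendered) {
		return // unchanged: skip the disruptive daemon restart
	}
	if err := os.WriteFile(unit, rendered, 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"-f", "daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}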
	I0111 08:03:06.213889  510536 start.go:293] postStartSetup for "force-systemd-flag-176470" (driver="docker")
	I0111 08:03:06.213900  510536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0111 08:03:06.213976  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0111 08:03:06.214038  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.233489  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.338958  510536 ssh_runner.go:195] Run: cat /etc/os-release
	I0111 08:03:06.342424  510536 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0111 08:03:06.342452  510536 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0111 08:03:06.342463  510536 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/addons for local assets ...
	I0111 08:03:06.342538  510536 filesync.go:126] Scanning /home/jenkins/minikube-integration/22402-276769/.minikube/files for local assets ...
	I0111 08:03:06.342671  510536 filesync.go:149] local asset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> 2786382.pem in /etc/ssl/certs
	I0111 08:03:06.342685  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /etc/ssl/certs/2786382.pem
	I0111 08:03:06.342793  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0111 08:03:06.351211  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:03:06.369285  510536 start.go:296] duration metric: took 155.381043ms for postStartSetup
	I0111 08:03:06.369638  510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
	I0111 08:03:06.399155  510536 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/config.json ...
	I0111 08:03:06.399451  510536 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 08:03:06.399491  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.417476  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.520083  510536 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0111 08:03:06.524815  510536 start.go:128] duration metric: took 11.088280156s to createHost
	I0111 08:03:06.524841  510536 start.go:83] releasing machines lock for "force-systemd-flag-176470", held for 11.088407356s
	I0111 08:03:06.524937  510536 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-176470
	I0111 08:03:06.541461  510536 ssh_runner.go:195] Run: cat /version.json
	I0111 08:03:06.541495  510536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0111 08:03:06.541521  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.541568  510536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-176470
	I0111 08:03:06.561814  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.578227  510536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/force-systemd-flag-176470/id_rsa Username:docker}
	I0111 08:03:06.765197  510536 ssh_runner.go:195] Run: systemctl --version
	I0111 08:03:06.771777  510536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0111 08:03:06.776029  510536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0111 08:03:06.776122  510536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0111 08:03:06.804486  510536 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0111 08:03:06.804565  510536 start.go:496] detecting cgroup driver to use...
	I0111 08:03:06.804592  510536 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:03:06.804767  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:03:06.818674  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0111 08:03:06.828002  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0111 08:03:06.837067  510536 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0111 08:03:06.837138  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0111 08:03:06.845964  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:03:06.855049  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0111 08:03:06.863676  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0111 08:03:06.872497  510536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0111 08:03:06.880973  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0111 08:03:06.890121  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0111 08:03:06.899090  510536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0111 08:03:06.908147  510536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0111 08:03:06.915960  510536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0111 08:03:06.923607  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:07.033909  510536 ssh_runner.go:195] Run: sudo systemctl restart containerd
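The run of sed commands above rewrites /etc/containerd/config.toml so containerd delegates cgroup management to systemd (SystemdCgroup = true), which is the point of --force-systemd; the remaining edits normalize the runc runtime version, sandbox image, and CNI conf_dir. The key substitution, sketched in Go:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}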
	I0111 08:03:07.138594  510536 start.go:496] detecting cgroup driver to use...
	I0111 08:03:07.138622  510536 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0111 08:03:07.138676  510536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0111 08:03:07.154245  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:03:07.172345  510536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0111 08:03:07.221655  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0111 08:03:07.234818  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0111 08:03:07.247793  510536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0111 08:03:07.261501  510536 ssh_runner.go:195] Run: which cri-dockerd
	I0111 08:03:07.264985  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0111 08:03:07.272438  510536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0111 08:03:07.284695  510536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0111 08:03:07.404970  510536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0111 08:03:07.524732  510536 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0111 08:03:07.524836  510536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0111 08:03:07.537550  510536 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0111 08:03:07.550391  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:07.666047  510536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0111 08:03:08.113136  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0111 08:03:08.126395  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0111 08:03:08.140492  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:03:08.154283  510536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0111 08:03:08.276152  510536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0111 08:03:08.399843  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:08.519920  510536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0111 08:03:08.535880  510536 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0111 08:03:08.548954  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:08.674253  510536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0111 08:03:08.750422  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0111 08:03:08.764674  510536 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0111 08:03:08.764745  510536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0111 08:03:08.770190  510536 start.go:574] Will wait 60s for crictl version
	I0111 08:03:08.770257  510536 ssh_runner.go:195] Run: which crictl
	I0111 08:03:08.773920  510536 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0111 08:03:08.803610  510536 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.4
	RuntimeApiVersion:  v1
	I0111 08:03:08.803693  510536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:03:08.828423  510536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0111 08:03:08.856514  510536 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.4 ...
	I0111 08:03:08.856630  510536 cli_runner.go:164] Run: docker network inspect force-systemd-flag-176470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0111 08:03:08.871466  510536 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0111 08:03:08.876325  510536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:03:08.886543  510536 kubeadm.go:884] updating cluster {Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0111 08:03:08.886659  510536 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0111 08:03:08.886724  510536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:03:08.904762  510536 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:03:08.904783  510536 docker.go:624] Images already preloaded, skipping extraction
	I0111 08:03:08.904854  510536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0111 08:03:08.922241  510536 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0111 08:03:08.922264  510536 cache_images.go:86] Images are preloaded, skipping loading
	I0111 08:03:08.922278  510536 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0111 08:03:08.922378  510536 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-176470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0111 08:03:08.922440  510536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0111 08:03:08.974284  510536 cni.go:84] Creating CNI manager for ""
	I0111 08:03:08.974315  510536 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 08:03:08.974351  510536 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0111 08:03:08.974374  510536 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-176470 NodeName:force-systemd-flag-176470 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0111 08:03:08.974535  510536 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-176470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0111 08:03:08.974611  510536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0111 08:03:08.983370  510536 binaries.go:51] Found k8s binaries, skipping transfer
	I0111 08:03:08.983451  510536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0111 08:03:08.991625  510536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0111 08:03:09.006053  510536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0111 08:03:09.021805  510536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
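The 2225-byte kubeadm.yaml.new just copied is the config printed at kubeadm.go:203, rendered from the option struct logged at kubeadm.go:197. A minimal, hypothetical sketch of that kind of render step using text/template; the template and field names here are illustrative, not minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts carries the handful of values this sketch renders; the real
// options struct (kubeadm.go:197) has many more fields.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	CgroupDriver     string
	PodSubnet        string
}

const configTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(configTmpl))
	t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		CgroupDriver:     "systemd", // enforced by --force-systemd
		PodSubnet:        "10.244.0.0/16",
	})
}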
	I0111 08:03:09.035826  510536 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0111 08:03:09.039822  510536 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0111 08:03:09.049986  510536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0111 08:03:09.169621  510536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0111 08:03:09.185723  510536 certs.go:69] Setting up /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470 for IP: 192.168.76.2
	I0111 08:03:09.185747  510536 certs.go:195] generating shared ca certs ...
	I0111 08:03:09.185764  510536 certs.go:227] acquiring lock for ca certs: {Name:mk5238b420a0ee024668d9aed797ac9a441cf30a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.185898  510536 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key
	I0111 08:03:09.185958  510536 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key
	I0111 08:03:09.185971  510536 certs.go:257] generating profile certs ...
	I0111 08:03:09.186038  510536 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key
	I0111 08:03:09.186055  510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt with IP's: []
	I0111 08:03:09.419531  510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt ...
	I0111 08:03:09.419571  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.crt: {Name:mk9418e58d3186bffe31b727378fd0d08defb8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.419773  510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key ...
	I0111 08:03:09.419788  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/client.key: {Name:mk349358a2ff97e24a0ee5565acc755705e64bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.419881  510536 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861
	I0111 08:03:09.419901  510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0111 08:03:09.847845  510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 ...
	I0111 08:03:09.847876  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861: {Name:mk412a98969fba1e6fc51a9a93b9bc1d873d6a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.848059  510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861 ...
	I0111 08:03:09.848075  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861: {Name:mkc93f76b0581a0b9e089b7481afceecd0c3c04d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:09.848163  510536 certs.go:382] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt.732f4861 -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt
	I0111 08:03:09.848240  510536 certs.go:386] copying /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key.732f4861 -> /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key
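crypto.go:68 above generates the apiserver serving certificate for the listed SANs: the cluster service IP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.76.2. A self-contained sketch of such a step with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the shared minikubeCA key pair instead:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the SANs listed at crypto.go:68 above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// Self-signed (template doubles as parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}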
	I0111 08:03:09.848303  510536 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key
	I0111 08:03:09.848323  510536 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt with IP's: []
	I0111 08:03:10.141613  510536 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt ...
	I0111 08:03:10.141647  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt: {Name:mk6dace60bb0b0492d37d0756683e679aa0ab1d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:10.141875  510536 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key ...
	I0111 08:03:10.141891  510536 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key: {Name:mk0b0880a2b49969d86a957c1c38bf80a6fa094b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 08:03:10.141982  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0111 08:03:10.142003  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0111 08:03:10.142022  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0111 08:03:10.142034  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0111 08:03:10.142051  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0111 08:03:10.142068  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0111 08:03:10.142084  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0111 08:03:10.142099  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0111 08:03:10.142154  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem (1338 bytes)
	W0111 08:03:10.142196  510536 certs.go:480] ignoring /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638_empty.pem, impossibly tiny 0 bytes
	I0111 08:03:10.142209  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca-key.pem (1675 bytes)
	I0111 08:03:10.142241  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/ca.pem (1082 bytes)
	I0111 08:03:10.142272  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/cert.pem (1123 bytes)
	I0111 08:03:10.142300  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/key.pem (1675 bytes)
	I0111 08:03:10.142362  510536 certs.go:484] found cert: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem (1708 bytes)
	I0111 08:03:10.142398  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.142416  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem -> /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.142435  510536 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem -> /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.143004  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0111 08:03:10.162328  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0111 08:03:10.184581  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0111 08:03:10.205364  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0111 08:03:10.225605  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0111 08:03:10.244217  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0111 08:03:10.262318  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0111 08:03:10.280609  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/force-systemd-flag-176470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0111 08:03:10.298945  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0111 08:03:10.317711  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/certs/278638.pem --> /usr/share/ca-certificates/278638.pem (1338 bytes)
	I0111 08:03:10.337232  510536 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/ssl/certs/2786382.pem --> /usr/share/ca-certificates/2786382.pem (1708 bytes)
	I0111 08:03:10.355950  510536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0111 08:03:10.369827  510536 ssh_runner.go:195] Run: openssl version
	I0111 08:03:10.376367  510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.384503  510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0111 08:03:10.392677  510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.396870  510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 07:24 /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.396985  510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0111 08:03:10.438118  510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0111 08:03:10.445811  510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0111 08:03:10.453210  510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.460886  510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/278638.pem /etc/ssl/certs/278638.pem
	I0111 08:03:10.468116  510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.472747  510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 11 07:30 /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.472823  510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278638.pem
	I0111 08:03:10.514049  510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0111 08:03:10.521615  510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/278638.pem /etc/ssl/certs/51391683.0
	I0111 08:03:10.529704  510536 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.537387  510536 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2786382.pem /etc/ssl/certs/2786382.pem
	I0111 08:03:10.545355  510536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.549343  510536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 11 07:30 /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.549411  510536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2786382.pem
	I0111 08:03:10.590601  510536 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0111 08:03:10.598218  510536 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2786382.pem /etc/ssl/certs/3ec20f2e.0
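	The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention: each CA certificate under /etc/ssl/certs must be reachable through a link named <subject-hash>.0, which is exactly the value the openssl x509 -hash calls compute. A minimal shell sketch of that convention, reusing the paths from the log (the loop is illustrative, not minikube's actual code):

		for pem in minikubeCA.pem 278638.pem 2786382.pem; do
		  # print the subject hash, e.g. b5213941 for minikubeCA.pem above
		  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
		  sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/${hash}.0"
		done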
	I0111 08:03:10.605617  510536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0111 08:03:10.609166  510536 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0111 08:03:10.609220  510536 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-176470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-176470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 08:03:10.609341  510536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0111 08:03:10.628830  510536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0111 08:03:10.640206  510536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0111 08:03:10.649415  510536 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:03:10.649480  510536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:03:10.660656  510536 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:03:10.660677  510536 kubeadm.go:158] found existing configuration files:
	
	I0111 08:03:10.660739  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:03:10.670232  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:03:10.670316  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:03:10.678581  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:03:10.688924  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:03:10.688993  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:03:10.696341  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:03:10.704448  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:03:10.704518  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:03:10.712096  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:03:10.719777  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:03:10.719863  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:03:10.727911  510536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:03:10.845766  510536 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:03:10.846305  510536 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:03:10.931360  510536 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
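	The SystemVerification warning above is kubeadm failing to locate the kernel configuration: it tries to load the "configs" module so that /proc/config.gz becomes readable, and on this 5.15.0-1084-aws kernel the module does not exist. A sketch of the same probe, assuming the standard locations the verifier consults:

		# expose the kernel config if the module exists, then check the usual paths
		sudo modprobe configs 2>/dev/null || true
		ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null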
	I0111 08:06:42.438404  501966 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00132891s
	I0111 08:06:42.438431  501966 kubeadm.go:319] 
	I0111 08:06:42.438488  501966 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:06:42.438521  501966 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:06:42.438626  501966 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:06:42.438631  501966 kubeadm.go:319] 
	I0111 08:06:42.438736  501966 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:06:42.438768  501966 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:06:42.438798  501966 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:06:42.438802  501966 kubeadm.go:319] 
	I0111 08:06:42.449958  501966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:06:42.450529  501966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:06:42.450677  501966 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:06:42.450986  501966 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:06:42.450998  501966 kubeadm.go:319] 
	I0111 08:06:42.451068  501966 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0111 08:06:42.451206  501966 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-081796 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-081796 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132891s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
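	The root failure above is the kubelet never answering its local health endpoint within the 4m0s budget, so kubeadm's wait-control-plane phase times out. The diagnosis commands kubeadm suggests, plus its own probe, can be replayed by hand inside the node; a sketch using the port named in the log:

		systemctl status kubelet
		journalctl -xeu kubelet
		curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm retries until the deadline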
	
	I0111 08:06:42.451287  501966 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0111 08:06:42.871378  501966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:06:42.885470  501966 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:06:42.885556  501966 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:06:42.893513  501966 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:06:42.893534  501966 kubeadm.go:158] found existing configuration files:
	
	I0111 08:06:42.893585  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:06:42.901632  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:06:42.901711  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:06:42.909718  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:06:42.917711  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:06:42.917776  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:06:42.925587  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:06:42.933893  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:06:42.933959  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:06:42.941734  501966 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:06:42.949802  501966 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:06:42.949884  501966 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:06:42.957799  501966 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:06:42.997422  501966 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:06:42.997491  501966 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:06:43.079330  501966 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:06:43.079407  501966 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:06:43.079467  501966 kubeadm.go:319] OS: Linux
	I0111 08:06:43.079536  501966 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:06:43.079593  501966 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:06:43.079643  501966 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:06:43.079699  501966 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:06:43.079755  501966 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:06:43.079805  501966 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:06:43.079857  501966 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:06:43.079914  501966 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:06:43.079968  501966 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:06:43.154582  501966 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:06:43.154784  501966 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:06:43.154940  501966 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:06:43.168319  501966 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:06:43.172046  501966 out.go:252]   - Generating certificates and keys ...
	I0111 08:06:43.172140  501966 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:06:43.172224  501966 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:06:43.172318  501966 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0111 08:06:43.172392  501966 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0111 08:06:43.172475  501966 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0111 08:06:43.172541  501966 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0111 08:06:43.172617  501966 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0111 08:06:43.172695  501966 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0111 08:06:43.172891  501966 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0111 08:06:43.172967  501966 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0111 08:06:43.173195  501966 kubeadm.go:319] [certs] Using the existing "sa" key
	I0111 08:06:43.173342  501966 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:06:43.558618  501966 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:06:43.710529  501966 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:06:44.025926  501966 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:06:44.234089  501966 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:06:44.851983  501966 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:06:44.852625  501966 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:06:44.855136  501966 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:06:44.858407  501966 out.go:252]   - Booting up control plane ...
	I0111 08:06:44.858523  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:06:44.858601  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:06:44.858683  501966 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:06:44.880530  501966 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:06:44.880657  501966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:06:44.887913  501966 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:06:44.888266  501966 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:06:44.888314  501966 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:06:45.038206  501966 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:06:45.047794  501966 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:07:15.098226  510536 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:07:15.098261  510536 kubeadm.go:319] 
	I0111 08:07:15.098395  510536 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:07:15.103138  510536 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0111 08:07:15.103229  510536 kubeadm.go:319] [preflight] Running pre-flight checks
	I0111 08:07:15.103392  510536 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0111 08:07:15.103495  510536 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0111 08:07:15.103566  510536 kubeadm.go:319] OS: Linux
	I0111 08:07:15.103647  510536 kubeadm.go:319] CGROUPS_CPU: enabled
	I0111 08:07:15.103732  510536 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0111 08:07:15.103815  510536 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0111 08:07:15.103897  510536 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0111 08:07:15.103980  510536 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0111 08:07:15.104062  510536 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0111 08:07:15.104143  510536 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0111 08:07:15.104224  510536 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0111 08:07:15.104304  510536 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0111 08:07:15.104430  510536 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0111 08:07:15.104597  510536 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0111 08:07:15.104755  510536 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0111 08:07:15.104862  510536 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0111 08:07:15.108159  510536 out.go:252]   - Generating certificates and keys ...
	I0111 08:07:15.108298  510536 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0111 08:07:15.108388  510536 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0111 08:07:15.108475  510536 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0111 08:07:15.108582  510536 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0111 08:07:15.108652  510536 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0111 08:07:15.108742  510536 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0111 08:07:15.108832  510536 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0111 08:07:15.108986  510536 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:07:15.109071  510536 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0111 08:07:15.109237  510536 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0111 08:07:15.109320  510536 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0111 08:07:15.109403  510536 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0111 08:07:15.109483  510536 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0111 08:07:15.109555  510536 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0111 08:07:15.109634  510536 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0111 08:07:15.109706  510536 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0111 08:07:15.109788  510536 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0111 08:07:15.109867  510536 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0111 08:07:15.109933  510536 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0111 08:07:15.110023  510536 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0111 08:07:15.110091  510536 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0111 08:07:15.113314  510536 out.go:252]   - Booting up control plane ...
	I0111 08:07:15.113429  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0111 08:07:15.113518  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0111 08:07:15.113592  510536 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0111 08:07:15.113703  510536 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0111 08:07:15.113801  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0111 08:07:15.113911  510536 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0111 08:07:15.114000  510536 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0111 08:07:15.114043  510536 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0111 08:07:15.114178  510536 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0111 08:07:15.114288  510536 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0111 08:07:15.114363  510536 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000233896s
	I0111 08:07:15.114372  510536 kubeadm.go:319] 
	I0111 08:07:15.114430  510536 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:07:15.114467  510536 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:07:15.114576  510536 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:07:15.114584  510536 kubeadm.go:319] 
	I0111 08:07:15.114691  510536 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:07:15.114727  510536 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:07:15.114763  510536 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W0111 08:07:15.114900  510536 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-176470 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000233896s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0111 08:07:15.114996  510536 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0111 08:07:15.116109  510536 kubeadm.go:319] 
	I0111 08:07:15.534570  510536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 08:07:15.548124  510536 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0111 08:07:15.548189  510536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0111 08:07:15.556213  510536 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0111 08:07:15.556274  510536 kubeadm.go:158] found existing configuration files:
	
	I0111 08:07:15.556335  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0111 08:07:15.563912  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0111 08:07:15.563978  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0111 08:07:15.571080  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0111 08:07:15.578655  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0111 08:07:15.578729  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0111 08:07:15.586262  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0111 08:07:15.593982  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0111 08:07:15.594058  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0111 08:07:15.601473  510536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0111 08:07:15.609148  510536 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0111 08:07:15.609220  510536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0111 08:07:15.616665  510536 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0111 08:07:15.736539  510536 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:07:15.737021  510536 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:07:15.804671  510536 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
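	The recurring cgroups v1 warning names a kubelet configuration option, 'FailCgroupV1'. A hedged sketch of setting it, assuming the option maps to a same-named camelCase field in the config file the log shows being written to /var/lib/kubelet/config.yaml (the field casing and placement are assumptions taken from the warning text, not verified against this kubelet build):

		# append the option and restart the kubelet; illustrative only
		sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
		failCgroupV1: false
		EOF
		sudo systemctl restart kubelet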
	I0111 08:10:45.038296  501966 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000312126s
	I0111 08:10:45.038337  501966 kubeadm.go:319] 
	I0111 08:10:45.038589  501966 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0111 08:10:45.038651  501966 kubeadm.go:319] 	- The kubelet is not running
	I0111 08:10:45.038884  501966 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0111 08:10:45.038894  501966 kubeadm.go:319] 
	I0111 08:10:45.039303  501966 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0111 08:10:45.039365  501966 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0111 08:10:45.039421  501966 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0111 08:10:45.039429  501966 kubeadm.go:319] 
	I0111 08:10:45.045107  501966 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0111 08:10:45.045693  501966 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0111 08:10:45.045856  501966 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0111 08:10:45.046152  501966 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0111 08:10:45.046171  501966 kubeadm.go:319] 
	I0111 08:10:45.046242  501966 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0111 08:10:45.046320  501966 kubeadm.go:403] duration metric: took 8m7.345360984s to StartCluster
	I0111 08:10:45.046394  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0111 08:10:45.046477  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0111 08:10:45.120565  501966 cri.go:96] found id: ""
	I0111 08:10:45.120636  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.120646  501966 logs.go:284] No container was found matching "kube-apiserver"
	I0111 08:10:45.120659  501966 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0111 08:10:45.120734  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0111 08:10:45.173010  501966 cri.go:96] found id: ""
	I0111 08:10:45.173034  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.173044  501966 logs.go:284] No container was found matching "etcd"
	I0111 08:10:45.173052  501966 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0111 08:10:45.173164  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0111 08:10:45.217457  501966 cri.go:96] found id: ""
	I0111 08:10:45.217482  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.217493  501966 logs.go:284] No container was found matching "coredns"
	I0111 08:10:45.217501  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0111 08:10:45.217571  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0111 08:10:45.277867  501966 cri.go:96] found id: ""
	I0111 08:10:45.277892  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.277901  501966 logs.go:284] No container was found matching "kube-scheduler"
	I0111 08:10:45.277909  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0111 08:10:45.277979  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0111 08:10:45.312543  501966 cri.go:96] found id: ""
	I0111 08:10:45.312567  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.312577  501966 logs.go:284] No container was found matching "kube-proxy"
	I0111 08:10:45.312584  501966 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0111 08:10:45.312651  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0111 08:10:45.339479  501966 cri.go:96] found id: ""
	I0111 08:10:45.339505  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.339514  501966 logs.go:284] No container was found matching "kube-controller-manager"
	I0111 08:10:45.339522  501966 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0111 08:10:45.339640  501966 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0111 08:10:45.386365  501966 cri.go:96] found id: ""
	I0111 08:10:45.386388  501966 logs.go:282] 0 containers: []
	W0111 08:10:45.386398  501966 logs.go:284] No container was found matching "kindnet"
	I0111 08:10:45.386408  501966 logs.go:123] Gathering logs for container status ...
	I0111 08:10:45.386420  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
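	The container-status command above prefers crictl and falls back to plain docker when no crictl binary is installed. Spelled out, the backtick-and-|| construct is equivalent to this restatement (not minikube's code):

		if command -v crictl >/dev/null 2>&1; then
		  sudo crictl ps -a    # CRI view of all containers
		else
		  sudo docker ps -a    # Docker fallback when crictl is absent
		fi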
	I0111 08:10:45.456280  501966 logs.go:123] Gathering logs for kubelet ...
	I0111 08:10:45.456306  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0111 08:10:45.524503  501966 logs.go:123] Gathering logs for dmesg ...
	I0111 08:10:45.524543  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0111 08:10:45.542963  501966 logs.go:123] Gathering logs for describe nodes ...
	I0111 08:10:45.542995  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0111 08:10:45.609485  501966 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:10:45.600807    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.601229    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603149    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603617    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.605081    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0111 08:10:45.600807    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.601229    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603149    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.603617    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:45.605081    5635 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0111 08:10:45.609509  501966 logs.go:123] Gathering logs for Docker ...
	I0111 08:10:45.609521  501966 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0111 08:10:45.632350  501966 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000312126s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0111 08:10:45.632420  501966 out.go:285] * 
	W0111 08:10:45.632482  501966 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000312126s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:10:45.632497  501966 out.go:285] * 
	W0111 08:10:45.632746  501966 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0111 08:10:45.637569  501966 out.go:203] 
	W0111 08:10:45.641373  501966 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000312126s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0111 08:10:45.641421  501966 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0111 08:10:45.641440  501966 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0111 08:10:45.645193  501966 out.go:203] 
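Read together, the kubeadm warnings above and the kubelet journal further down give the root cause: this host is still on cgroup v1, and kubelet v1.35 refuses to start there unless the kubelet configuration option FailCgroupV1 is set to false. A minimal sketch of the two workarounds the log itself names follows; the lowercase YAML field spelling and the append-to-config approach are assumptions, not taken from this job:

	# Check which cgroup version the host runs (standard mount point assumed).
	stat -fc %T /sys/fs/cgroup         # "tmpfs" => cgroup v1, "cgroup2fs" => cgroup v2

	# Workaround named by the kubeadm warning: FailCgroupV1=false in the kubelet
	# configuration (the log shows it lives at /var/lib/kubelet/config.yaml).
	# Appending only works if the key is not already present in the file.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet

	# Workaround printed by minikube's own suggestion line:
	minikube start --extra-config=kubelet.cgroup-driver=systemd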
	
	
	==> Docker <==
	Jan 11 08:02:34 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:34.858198139Z" level=info msg="Restoring containers: start."
	Jan 11 08:02:34 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:34.871270266Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Jan 11 08:02:34 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:34.883296981Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.292135408Z" level=info msg="Loading containers: done."
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.315113062Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.315294768Z" level=info msg="Docker daemon" commit=08440b6 containerd-snapshotter=false storage-driver=overlay2 version=29.1.4
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.315448208Z" level=info msg="Initializing buildkit"
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.338507977Z" level=info msg="Completed buildkit initialization"
	Jan 11 08:02:35 force-systemd-env-081796 systemd[1]: Started docker.service - Docker Application Container Engine.
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.357703222Z" level=info msg="Daemon has completed initialization"
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.370603521Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.370744359Z" level=info msg="API listen on /run/docker.sock"
	Jan 11 08:02:35 force-systemd-env-081796 dockerd[1140]: time="2026-01-11T08:02:35.370760350Z" level=info msg="API listen on [::]:2376"
	Jan 11 08:02:36 force-systemd-env-081796 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Start docker client with request timeout 0s"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Loaded network plugin cni"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Setting cgroupDriver systemd"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 11 08:02:36 force-systemd-env-081796 cri-dockerd[1425]: time="2026-01-11T08:02:36Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 11 08:02:36 force-systemd-env-081796 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
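The Docker journal confirms the force-systemd wiring took effect (cri-dockerd logs "Setting cgroupDriver systemd") while dockerd still warns about cgroup v1. Both facts can be read back from the daemon with standard `docker info` template fields; a small sketch, run on the node:

	# .CgroupDriver and .CgroupVersion are standard `docker info` template fields.
	docker info --format '{{.CgroupDriver}} cgroup-v{{.CgroupVersion}}'   # e.g. "systemd cgroup-v1"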
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0111 08:10:47.041031    5765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:47.041792    5765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:47.043508    5765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:47.044096    5765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0111 08:10:47.045791    5765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
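Every kubectl failure in this report is the same symptom: nothing listens on localhost:8443 because the apiserver never came up. A quick probe that distinguishes "apiserver down" from "wrong endpoint", assuming shell access to the node:

	# No listener on 8443 means the control plane never started, matching
	# the "connection refused" errors above.
	sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
	curl -ksm 5 https://localhost:8443/healthz || true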
	
	
	==> dmesg <==
	[Jan11 06:45] overlayfs: idmapped layers are currently not supported
	[Jan11 06:46] overlayfs: idmapped layers are currently not supported
	[Jan11 06:47] overlayfs: idmapped layers are currently not supported
	[Jan11 06:56] overlayfs: idmapped layers are currently not supported
	[  +5.181200] overlayfs: idmapped layers are currently not supported
	[Jan11 07:00] overlayfs: idmapped layers are currently not supported
	[Jan11 07:01] overlayfs: idmapped layers are currently not supported
	[Jan11 07:06] overlayfs: idmapped layers are currently not supported
	[Jan11 07:07] overlayfs: idmapped layers are currently not supported
	[Jan11 07:08] overlayfs: idmapped layers are currently not supported
	[Jan11 07:09] overlayfs: idmapped layers are currently not supported
	[ +36.684603] overlayfs: idmapped layers are currently not supported
	[Jan11 07:10] overlayfs: idmapped layers are currently not supported
	[Jan11 07:11] overlayfs: idmapped layers are currently not supported
	[Jan11 07:12] overlayfs: idmapped layers are currently not supported
	[ +18.034227] overlayfs: idmapped layers are currently not supported
	[Jan11 07:13] overlayfs: idmapped layers are currently not supported
	[Jan11 07:14] overlayfs: idmapped layers are currently not supported
	[Jan11 07:15] overlayfs: idmapped layers are currently not supported
	[ +23.411747] overlayfs: idmapped layers are currently not supported
	[Jan11 07:16] overlayfs: idmapped layers are currently not supported
	[ +26.028245] overlayfs: idmapped layers are currently not supported
	[Jan11 07:17] overlayfs: idmapped layers are currently not supported
	[Jan11 07:18] overlayfs: idmapped layers are currently not supported
	[Jan11 07:23] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 08:10:47 up  2:53,  0 user,  load average: 0.83, 0.99, 1.84
	Linux force-systemd-env-081796 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 11 08:10:43 force-systemd-env-081796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:10:44 force-systemd-env-081796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 11 08:10:44 force-systemd-env-081796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:44 force-systemd-env-081796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:44 force-systemd-env-081796 kubelet[5545]: E0111 08:10:44.696999    5545 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:10:44 force-systemd-env-081796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:10:44 force-systemd-env-081796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:10:45 force-systemd-env-081796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 11 08:10:45 force-systemd-env-081796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:45 force-systemd-env-081796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:45 force-systemd-env-081796 kubelet[5612]: E0111 08:10:45.480239    5612 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:10:45 force-systemd-env-081796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:10:45 force-systemd-env-081796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:46 force-systemd-env-081796 kubelet[5666]: E0111 08:10:46.250699    5666 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 11 08:10:46 force-systemd-env-081796 kubelet[5745]: E0111 08:10:46.979511    5745 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 11 08:10:46 force-systemd-env-081796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
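The journal shows the crash loop directly: systemd restarts kubelet (counter 319 through 322) and every attempt dies on the same cgroup v1 validation error. The commands kubeadm suggested earlier reproduce this view, and the [WARNING Service-kubelet] item has a one-line fix:

	# Troubleshooting commands taken from kubeadm's own hints above; run on the node.
	systemctl status kubelet               # Failed with result 'exit-code'
	journalctl -xeu kubelet | tail -n 50   # shows the cgroup v1 validation error
	sudo systemctl enable kubelet.service  # clears [WARNING Service-kubelet]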
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-081796 -n force-systemd-env-081796
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-081796 -n force-systemd-env-081796: exit status 6 (328.882699ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:10:47.490947  520575 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-081796" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-081796" apiserver is not running, skipping kubectl commands (state="Stopped")
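The harness reads a single status field through a Go template. The same syntax works for the other status fields, and the stale-context warning above comes with the fix minikube itself prints; a sketch, assuming the profile still exists at this point:

	minikube status -p force-systemd-env-081796 --format='{{.APIServer}}'   # prints Stopped here
	minikube status -p force-systemd-env-081796 --format='{{.Kubelet}}'
	minikube update-context -p force-systemd-env-081796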
helpers_test.go:176: Cleaning up "force-systemd-env-081796" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-081796
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-081796: (1.791180126s)
--- FAIL: TestForceSystemdEnv (506.90s)

                                                
                                    

Test pass (324/352)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.4
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 2.82
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.61
22 TestOffline 80.24
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 141.72
29 TestAddons/serial/Volcano 41.59
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 10.88
35 TestAddons/parallel/Registry 16.58
36 TestAddons/parallel/RegistryCreds 0.69
37 TestAddons/parallel/Ingress 17.69
38 TestAddons/parallel/InspektorGadget 11.92
39 TestAddons/parallel/MetricsServer 5.75
41 TestAddons/parallel/CSI 53.37
42 TestAddons/parallel/Headlamp 17.01
43 TestAddons/parallel/CloudSpanner 5.63
44 TestAddons/parallel/LocalPath 52.92
45 TestAddons/parallel/NvidiaDevicePlugin 6.47
46 TestAddons/parallel/Yakd 11.71
48 TestAddons/StoppedEnableDisable 11.32
49 TestCertOptions 37.78
50 TestCertExpiration 248.27
51 TestDockerFlags 34.69
58 TestErrorSpam/setup 28.14
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.21
61 TestErrorSpam/pause 1.46
62 TestErrorSpam/unpause 1.65
63 TestErrorSpam/stop 11.26
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 72.78
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.81
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
75 TestFunctional/serial/CacheCmd/cache/add_local 1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 41.58
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.25
86 TestFunctional/serial/LogsFileCmd 1.27
87 TestFunctional/serial/InvalidService 4.38
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 12.35
91 TestFunctional/parallel/DryRun 0.61
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.22
97 TestFunctional/parallel/ServiceCmdConnect 8.61
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 20.01
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 2.38
104 TestFunctional/parallel/FileSync 0.39
105 TestFunctional/parallel/CertSync 2.06
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
113 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 8.28
130 TestFunctional/parallel/ServiceCmd/List 0.53
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
133 TestFunctional/parallel/ServiceCmd/Format 0.37
134 TestFunctional/parallel/ServiceCmd/URL 0.5
135 TestFunctional/parallel/MountCmd/specific-port 2.56
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.77
137 TestFunctional/parallel/Version/short 0.09
138 TestFunctional/parallel/Version/components 1.19
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.28
144 TestFunctional/parallel/ImageCommands/Setup 0.6
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
155 TestFunctional/parallel/DockerEnv/bash 1.29
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 151.59
164 TestMultiControlPlane/serial/DeployApp 7.78
165 TestMultiControlPlane/serial/PingHostFromPods 1.73
166 TestMultiControlPlane/serial/AddWorkerNode 35.8
167 TestMultiControlPlane/serial/NodeLabels 0.14
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
169 TestMultiControlPlane/serial/CopyFile 20.8
170 TestMultiControlPlane/serial/StopSecondaryNode 12.07
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 48.58
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.36
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 148.96
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.6
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
177 TestMultiControlPlane/serial/StopCluster 33.46
178 TestMultiControlPlane/serial/RestartCluster 67.67
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
180 TestMultiControlPlane/serial/AddSecondaryNode 60.16
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.16
184 TestImageBuild/serial/Setup 28.56
185 TestImageBuild/serial/NormalBuild 1.72
186 TestImageBuild/serial/BuildWithBuildArg 0.94
187 TestImageBuild/serial/BuildWithDockerIgnore 0.76
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.05
193 TestJSONOutput/start/Command 69.67
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.65
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.59
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 6.14
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.23
218 TestKicCustomNetwork/create_custom_network 32.07
219 TestKicCustomNetwork/use_default_bridge_network 30.8
220 TestKicExistingNetwork 28.54
221 TestKicCustomSubnet 30.66
222 TestKicStaticIP 30.6
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 63.73
227 TestMountStart/serial/StartWithMountFirst 10.47
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 10.05
230 TestMountStart/serial/VerifyMountSecond 0.28
231 TestMountStart/serial/DeleteFirst 1.56
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.29
234 TestMountStart/serial/RestartStopped 8.7
235 TestMountStart/serial/VerifyMountPostStop 0.3
238 TestMultiNode/serial/FreshStart2Nodes 82.65
239 TestMultiNode/serial/DeployApp2Nodes 6.04
240 TestMultiNode/serial/PingHostFrom2Pods 1.01
241 TestMultiNode/serial/AddNode 34.91
242 TestMultiNode/serial/MultiNodeLabels 0.1
243 TestMultiNode/serial/ProfileList 0.72
244 TestMultiNode/serial/CopyFile 10.78
245 TestMultiNode/serial/StopNode 2.49
246 TestMultiNode/serial/StartAfterStop 9.45
247 TestMultiNode/serial/RestartKeepsNodes 80.69
248 TestMultiNode/serial/DeleteNode 5.89
249 TestMultiNode/serial/StopMultiNode 21.95
250 TestMultiNode/serial/RestartMultiNode 54.8
251 TestMultiNode/serial/ValidateNameConflict 32.64
258 TestScheduledStopUnix 102.9
259 TestSkaffold 137.59
261 TestInsufficientStorage 10.95
262 TestRunningBinaryUpgrade 367.74
264 TestKubernetesUpgrade 177.5
265 TestMissingContainerUpgrade 90.06
267 TestPause/serial/Start 55.8
268 TestPause/serial/SecondStartNoReconfiguration 42.07
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.15
271 TestNoKubernetes/serial/StartWithK8s 31.82
272 TestPause/serial/Pause 0.85
273 TestPause/serial/VerifyStatus 0.47
274 TestPause/serial/Unpause 0.77
275 TestPause/serial/PauseAgain 1.18
276 TestPause/serial/DeletePaused 2.58
277 TestPause/serial/VerifyDeletedResources 4.57
289 TestNoKubernetes/serial/StartWithStopK8s 13.31
290 TestNoKubernetes/serial/Start 9.69
291 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
292 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
293 TestNoKubernetes/serial/ProfileList 1.06
294 TestNoKubernetes/serial/Stop 1.31
295 TestNoKubernetes/serial/StartNoArgs 7.62
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
297 TestStoppedBinaryUpgrade/Setup 0.86
298 TestStoppedBinaryUpgrade/Upgrade 136.34
299 TestPreload/Start-NoPreload-PullImage 89.22
300 TestStoppedBinaryUpgrade/MinikubeLogs 2.07
308 TestNetworkPlugins/group/auto/Start 48.2
309 TestNetworkPlugins/group/auto/KubeletFlags 0.31
310 TestNetworkPlugins/group/auto/NetCatPod 10.27
311 TestPreload/Restart-With-Preload-Check-User-Image 56.83
312 TestNetworkPlugins/group/auto/DNS 0.27
313 TestNetworkPlugins/group/auto/Localhost 0.17
314 TestNetworkPlugins/group/auto/HairPin 0.16
315 TestNetworkPlugins/group/kindnet/Start 57.19
317 TestNetworkPlugins/group/calico/Start 69.53
318 TestNetworkPlugins/group/kindnet/ControllerPod 6
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
320 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
321 TestNetworkPlugins/group/kindnet/DNS 0.3
322 TestNetworkPlugins/group/kindnet/Localhost 0.21
323 TestNetworkPlugins/group/kindnet/HairPin 0.23
324 TestNetworkPlugins/group/custom-flannel/Start 49.7
325 TestNetworkPlugins/group/calico/ControllerPod 6.01
326 TestNetworkPlugins/group/calico/KubeletFlags 0.31
327 TestNetworkPlugins/group/calico/NetCatPod 12.25
328 TestNetworkPlugins/group/calico/DNS 0.24
329 TestNetworkPlugins/group/calico/Localhost 0.22
330 TestNetworkPlugins/group/calico/HairPin 0.22
331 TestNetworkPlugins/group/false/Start 73.22
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
334 TestNetworkPlugins/group/custom-flannel/DNS 0.22
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
337 TestNetworkPlugins/group/enable-default-cni/Start 67.83
338 TestNetworkPlugins/group/false/KubeletFlags 0.46
339 TestNetworkPlugins/group/false/NetCatPod 10.46
340 TestNetworkPlugins/group/false/DNS 0.24
341 TestNetworkPlugins/group/false/Localhost 0.16
342 TestNetworkPlugins/group/false/HairPin 0.16
343 TestNetworkPlugins/group/flannel/Start 49.73
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.41
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
349 TestNetworkPlugins/group/bridge/Start 48.76
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
352 TestNetworkPlugins/group/flannel/NetCatPod 12.35
353 TestNetworkPlugins/group/flannel/DNS 0.24
354 TestNetworkPlugins/group/flannel/Localhost 0.23
355 TestNetworkPlugins/group/flannel/HairPin 0.21
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
357 TestNetworkPlugins/group/bridge/NetCatPod 12.49
358 TestNetworkPlugins/group/kubenet/Start 70.25
359 TestNetworkPlugins/group/bridge/DNS 0.26
360 TestNetworkPlugins/group/bridge/Localhost 0.2
361 TestNetworkPlugins/group/bridge/HairPin 0.2
363 TestStartStop/group/old-k8s-version/serial/FirstStart 88.15
364 TestNetworkPlugins/group/kubenet/KubeletFlags 0.37
365 TestNetworkPlugins/group/kubenet/NetCatPod 10.37
366 TestNetworkPlugins/group/kubenet/DNS 0.22
367 TestNetworkPlugins/group/kubenet/Localhost 0.36
368 TestNetworkPlugins/group/kubenet/HairPin 0.37
370 TestStartStop/group/no-preload/serial/FirstStart 74.55
371 TestStartStop/group/old-k8s-version/serial/DeployApp 11.5
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.48
373 TestStartStop/group/old-k8s-version/serial/Stop 11.4
374 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
375 TestStartStop/group/old-k8s-version/serial/SecondStart 30.63
376 TestStartStop/group/no-preload/serial/DeployApp 9.36
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11
378 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
379 TestStartStop/group/no-preload/serial/Stop 11.68
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
382 TestStartStop/group/old-k8s-version/serial/Pause 2.91
383 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
384 TestStartStop/group/no-preload/serial/SecondStart 57.58
386 TestStartStop/group/embed-certs/serial/FirstStart 75.33
387 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
388 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
389 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
390 TestStartStop/group/no-preload/serial/Pause 3.17
392 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.04
393 TestStartStop/group/embed-certs/serial/DeployApp 11.45
394 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.48
395 TestStartStop/group/embed-certs/serial/Stop 11.57
396 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
397 TestStartStop/group/embed-certs/serial/SecondStart 53.06
398 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
399 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
400 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
401 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.45
402 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
403 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
404 TestStartStop/group/embed-certs/serial/Pause 3.72
405 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.44
406 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 36.43
408 TestStartStop/group/newest-cni/serial/FirstStart 40.76
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
411 TestStartStop/group/newest-cni/serial/DeployApp 0
412 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
413 TestStartStop/group/newest-cni/serial/Stop 11.7
414 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
415 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
416 TestPreload/PreloadSrc/gcs 4.44
417 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
418 TestStartStop/group/newest-cni/serial/SecondStart 17.15
419 TestPreload/PreloadSrc/github 3.97
420 TestPreload/PreloadSrc/gcs-cached 0.59
421 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
422 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
423 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
424 TestStartStop/group/newest-cni/serial/Pause 3.22
x
+
TestDownloadOnly/v1.28.0/json-events (9.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-718118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-718118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.403576273s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.40s)
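The json-events variant re-runs start with -o=json, which streams one JSON event per line; the sibling DistinctCurrentSteps/IncreasingCurrentSteps checks suggest a data.currentstep field. A hedged way to eyeball the stream (the field name and jq availability are assumptions):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-718118 \
	  --driver=docker --kubernetes-version=v1.28.0 2>/dev/null \
	  | jq -r '.data.currentstep? // empty'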

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0111 07:23:50.325786  278638 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0111 07:23:50.325872  278638 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
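The preload-exists check amounts to a stat of the cached tarball; an equivalent manual check, with the path taken verbatim from the log above:

	ls -lh /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4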

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-718118
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-718118: exit status 85 (91.57687ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-718118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-718118 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:23:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:23:40.967804  278644 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:23:40.968003  278644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:40.968016  278644 out.go:374] Setting ErrFile to fd 2...
	I0111 07:23:40.968021  278644 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:40.968275  278644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	W0111 07:23:40.968411  278644 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22402-276769/.minikube/config/config.json: open /home/jenkins/minikube-integration/22402-276769/.minikube/config/config.json: no such file or directory
	I0111 07:23:40.968804  278644 out.go:368] Setting JSON to true
	I0111 07:23:40.969668  278644 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7571,"bootTime":1768108650,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 07:23:40.969741  278644 start.go:143] virtualization:  
	I0111 07:23:40.975336  278644 out.go:99] [download-only-718118] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0111 07:23:40.975572  278644 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball: no such file or directory
	I0111 07:23:40.975625  278644 notify.go:221] Checking for updates...
	I0111 07:23:40.978785  278644 out.go:171] MINIKUBE_LOCATION=22402
	I0111 07:23:40.982005  278644 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:23:40.985212  278644 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 07:23:40.988258  278644 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 07:23:40.991277  278644 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0111 07:23:40.997287  278644 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 07:23:40.997584  278644 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:23:41.022689  278644 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:23:41.022803  278644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:41.084954  278644 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 07:23:41.075842168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:23:41.085059  278644 docker.go:319] overlay module found
	I0111 07:23:41.088217  278644 out.go:99] Using the docker driver based on user configuration
	I0111 07:23:41.088260  278644 start.go:309] selected driver: docker
	I0111 07:23:41.088268  278644 start.go:928] validating driver "docker" against <nil>
	I0111 07:23:41.088377  278644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:41.141465  278644 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-11 07:23:41.132646658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:23:41.141632  278644 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:23:41.141894  278644 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 07:23:41.142047  278644 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:23:41.145207  278644 out.go:171] Using Docker driver with root privileges
	I0111 07:23:41.148154  278644 cni.go:84] Creating CNI manager for ""
	I0111 07:23:41.148236  278644 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0111 07:23:41.148253  278644 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0111 07:23:41.148343  278644 start.go:353] cluster config:
	{Name:download-only-718118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-718118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:23:41.151387  278644 out.go:99] Starting "download-only-718118" primary control-plane node in "download-only-718118" cluster
	I0111 07:23:41.151406  278644 cache.go:134] Beginning downloading kic base image for docker with docker
	I0111 07:23:41.154222  278644 out.go:99] Pulling base image v0.0.48-1768032998-22402 ...
	I0111 07:23:41.154257  278644 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0111 07:23:41.154409  278644 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local docker daemon
	I0111 07:23:41.170341  278644 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:23:41.170524  278644 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 in local cache directory
	I0111 07:23:41.170622  278644 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 to local cache
	I0111 07:23:41.197975  278644 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0111 07:23:41.198001  278644 cache.go:65] Caching tarball of preloaded images
	I0111 07:23:41.198180  278644 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0111 07:23:41.201405  278644 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0111 07:23:41.201426  278644 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0111 07:23:41.201434  278644 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I0111 07:23:41.277091  278644 preload.go:313] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I0111 07:23:41.277230  278644 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0111 07:23:44.427296  278644 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I0111 07:23:44.427761  278644 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/download-only-718118/config.json ...
	I0111 07:23:44.427800  278644 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/download-only-718118/config.json: {Name:mke487695238cef23164db1e3de926888c1c631b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0111 07:23:44.428008  278644 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0111 07:23:44.428255  278644 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22402-276769/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-718118 host does not exist
	  To start a cluster, run: "minikube start -p download-only-718118"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
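The v1.28.0 download-only run above fetches the preload tarball and validates it against the MD5 checksum returned by the GCS API. To spot-check a cached preload by hand, a minimal sketch (assuming curl and md5sum are available; URL and checksum are the ones logged above):

    # Re-download the preload tarball and verify the logged MD5 checksum.
    URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4"
    curl -fsSL -o /tmp/preload.tar.lz4 "$URL"
    echo "002a73d62a3b066a08573cf3da2c8cb4  /tmp/preload.tar.lz4" | md5sum -c -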

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-718118
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (2.82s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-258853 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-258853 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (2.824108672s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.82s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0111 07:23:53.593712  278638 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0111 07:23:53.593749  278638 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-258853
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-258853: exit status 85 (89.2306ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-718118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-718118 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ delete  │ -p download-only-718118                                                                                                                                                       │ download-only-718118 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │ 11 Jan 26 07:23 UTC │
	│ start   │ -o=json --download-only -p download-only-258853 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-258853 │ jenkins │ v1.37.0 │ 11 Jan 26 07:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/11 07:23:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0111 07:23:50.811096  278842 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:23:50.811212  278842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:50.811224  278842 out.go:374] Setting ErrFile to fd 2...
	I0111 07:23:50.811230  278842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:23:50.811491  278842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:23:50.811894  278842 out.go:368] Setting JSON to true
	I0111 07:23:50.812663  278842 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7581,"bootTime":1768108650,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 07:23:50.812732  278842 start.go:143] virtualization:  
	I0111 07:23:50.815963  278842 out.go:99] [download-only-258853] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 07:23:50.816158  278842 notify.go:221] Checking for updates...
	I0111 07:23:50.819099  278842 out.go:171] MINIKUBE_LOCATION=22402
	I0111 07:23:50.822392  278842 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:23:50.825323  278842 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 07:23:50.828208  278842 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 07:23:50.831042  278842 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0111 07:23:50.836671  278842 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0111 07:23:50.836922  278842 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:23:50.867458  278842 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:23:50.867565  278842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:50.923854  278842 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-11 07:23:50.914959436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:23:50.923958  278842 docker.go:319] overlay module found
	I0111 07:23:50.926955  278842 out.go:99] Using the docker driver based on user configuration
	I0111 07:23:50.927007  278842 start.go:309] selected driver: docker
	I0111 07:23:50.927014  278842 start.go:928] validating driver "docker" against <nil>
	I0111 07:23:50.927113  278842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:23:50.976089  278842 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-11 07:23:50.967489113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:23:50.976264  278842 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0111 07:23:50.976521  278842 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0111 07:23:50.976678  278842 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0111 07:23:50.979731  278842 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-258853 host does not exist
	  To start a cluster, run: "minikube start -p download-only-258853"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-258853
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I0111 07:23:54.754155  278638 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-676209 --alsologtostderr --binary-mirror http://127.0.0.1:37233 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-676209" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-676209
--- PASS: TestBinaryMirror (0.61s)
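TestBinaryMirror starts minikube with --binary-mirror pointing at a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of the same idea (the port, directory, and profile name here are hypothetical, and the directory must mirror dl.k8s.io's path layout):

    # Serve a local directory as a Kubernetes binary mirror, then point minikube at it.
    python3 -m http.server 9999 --directory /srv/k8s-mirror &
    out/minikube-linux-arm64 start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:9999 --driver=docker --container-runtime=docker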

TestOffline (80.24s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-836361 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-836361 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m17.548271465s)
helpers_test.go:176: Cleaning up "offline-docker-836361" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-836361
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-836361: (2.695283765s)
--- PASS: TestOffline (80.24s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-664377
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-664377: exit status 85 (66.877967ms)

-- stdout --
	* Profile "addons-664377" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-664377"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-664377
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-664377: exit status 85 (71.361019ms)

-- stdout --
	* Profile "addons-664377" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-664377"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (141.72s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-664377 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-664377 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m21.717534456s)
--- PASS: TestAddons/Setup (141.72s)
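The setup run enables every addon up front via repeated --addons flags. The same addons can also be toggled individually on an existing profile; a short sketch (the profile name is hypothetical, the addon names are the ones exercised in this suite):

    # Enable and inspect addons on a running profile.
    out/minikube-linux-arm64 -p addons-demo addons enable registry
    out/minikube-linux-arm64 -p addons-demo addons enable metrics-server
    out/minikube-linux-arm64 -p addons-demo addons list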

TestAddons/serial/Volcano (41.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 43.674078ms
addons_test.go:870: volcano-scheduler stabilized in 44.156141ms
addons_test.go:886: volcano-controller stabilized in 44.319025ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-28nmk" [56c6eb2c-e41f-4578-91a6-6b2260205a95] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003000476s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-8kv5t" [038a00ea-a359-4e62-bb0f-6e5b2889453c] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002803308s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-wbp7s" [46d1190b-f50c-4708-9b7d-67502b2db74a] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002902411s
addons_test.go:905: (dbg) Run:  kubectl --context addons-664377 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-664377 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-664377 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [70408dc7-7420-41b7-a786-9d91537b53df] Pending
helpers_test.go:353: "test-job-nginx-0" [70408dc7-7420-41b7-a786-9d91537b53df] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [70408dc7-7420-41b7-a786-9d91537b53df] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003732746s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable volcano --alsologtostderr -v=1: (11.932959571s)
--- PASS: TestAddons/serial/Volcano (41.59s)
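The Volcano check submits a vcjob from testdata/vcjob.yaml and waits for volcano.sh/job-name=test-job to become healthy. For orientation, a minimal Volcano Job in the upstream batch.volcano.sh/v1alpha1 API; this is an illustrative manifest, not the repo's testdata file, and it assumes the my-volcano namespace already exists:

    kubectl --context addons-demo apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano    # assumed to exist
    spec:
      schedulerName: volcano
      minAvailable: 1
      tasks:
        - replicas: 1
          name: nginx
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx:stable
    EOF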

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-664377 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-664377 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (10.88s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-664377 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-664377 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [75c31f88-a833-4626-95bc-916249c2c53e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [75c31f88-a833-4626-95bc-916249c2c53e] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003378792s
addons_test.go:696: (dbg) Run:  kubectl --context addons-664377 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-664377 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-664377 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-664377 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.88s)

TestAddons/parallel/Registry (16.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 21.336122ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-w9gz4" [05650128-6b6a-4e23-b985-6e94468fb63d] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00269227s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-h7d86" [219d88e4-b13f-44fd-b25c-f792a8979f5f] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003326446s
addons_test.go:394: (dbg) Run:  kubectl --context addons-664377 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-664377 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-664377 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.575421291s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 ip
2026/01/11 07:27:35 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.58s)
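The registry addon is reached two ways in this test: in-cluster through the kube-system Service, and from the host through the node-level proxy on port 5000 (the DEBUG GET above). A quick reachability sketch (profile name hypothetical):

    # In-cluster probe through the Service DNS name.
    kubectl --context addons-demo run --rm -it registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side probe of the node-level registry proxy.
    curl -sI "http://$(out/minikube-linux-arm64 -p addons-demo ip):5000/"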

TestAddons/parallel/RegistryCreds (0.69s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.499999ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-664377
addons_test.go:334: (dbg) Run:  kubectl --context addons-664377 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

TestAddons/parallel/Ingress (17.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-664377 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-664377 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-664377 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [acfaf985-2f13-456d-8bf1-fe93351fd634] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [acfaf985-2f13-456d-8bf1-fe93351fd634] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 7.003988759s
I0111 07:28:01.574466  278638 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-664377 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable ingress-dns --alsologtostderr -v=1: (1.000252919s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable ingress --alsologtostderr -v=1: (7.814579346s)
--- PASS: TestAddons/parallel/Ingress (17.69s)
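The ingress check curls through the node with an explicit Host header, and ingress-dns resolves hello-john.test against the node IP. Outside the harness the same route can be exercised without editing /etc/hosts; a sketch assuming a curl with --resolve support and the nginx ingress host used above:

    # Pin nginx.example.com to the minikube node IP for a single request.
    NODE_IP="$(out/minikube-linux-arm64 -p addons-demo ip)"
    curl -s --resolve "nginx.example.com:80:${NODE_IP}" http://nginx.example.com/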

TestAddons/parallel/InspektorGadget (11.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-dftwd" [b977f53a-6a42-431c-969c-614f8be6f51f] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004008274s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable inspektor-gadget --alsologtostderr -v=1: (5.912197829s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

TestAddons/parallel/MetricsServer (5.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.713634ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-wt42c" [6dbbb236-7172-425b-8a1d-083e9fc367fc] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003800746s
addons_test.go:465: (dbg) Run:  kubectl --context addons-664377 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/CSI (53.37s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0111 07:27:35.779613  278638 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0111 07:27:35.785774  278638 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0111 07:27:35.785805  278638 kapi.go:107] duration metric: took 10.093069ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 10.106181ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-664377 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-664377 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [9a1d1e58-c59b-4896-8a35-e43feb8617b9] Pending
helpers_test.go:353: "task-pv-pod" [9a1d1e58-c59b-4896-8a35-e43feb8617b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [9a1d1e58-c59b-4896-8a35-e43feb8617b9] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003351013s
addons_test.go:574: (dbg) Run:  kubectl --context addons-664377 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-664377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-664377 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-664377 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-664377 delete pod task-pv-pod: (1.021860882s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-664377 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-664377 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-664377 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [6e3004bd-8aab-4aab-9f03-ea2f3f9cfa62] Pending
helpers_test.go:353: "task-pv-pod-restore" [6e3004bd-8aab-4aab-9f03-ea2f3f9cfa62] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003727526s
addons_test.go:616: (dbg) Run:  kubectl --context addons-664377 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-664377 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-664377 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.812052284s)
--- PASS: TestAddons/parallel/CSI (53.37s)
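The CSI flow above is: PVC, pod, VolumeSnapshot, PVC restored from the snapshot, pod again. For orientation, a minimal snapshot manifest in the snapshot.storage.k8s.io/v1 API; the snapshot class name is an assumption about what the csi-hostpath-driver addon installs, not something read from this log:

    kubectl --context addons-demo apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass    # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    EOF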

TestAddons/parallel/Headlamp (17.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-664377 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-856fd" [94da7fd3-a887-4f05-bd80-5b26ab136e92] Pending
helpers_test.go:353: "headlamp-6d8d595f-856fd" [94da7fd3-a887-4f05-bd80-5b26ab136e92] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-856fd" [94da7fd3-a887-4f05-bd80-5b26ab136e92] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004041967s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable headlamp --alsologtostderr -v=1: (6.100909822s)
--- PASS: TestAddons/parallel/Headlamp (17.01s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-pz7lz" [639260d6-dae3-405f-aaa6-2143e99c7a59] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004139121s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (52.92s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-664377 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-664377 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [166d70ab-bdd9-41af-9656-0b0cb590ee0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [166d70ab-bdd9-41af-9656-0b0cb590ee0d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [166d70ab-bdd9-41af-9656-0b0cb590ee0d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004058491s
addons_test.go:969: (dbg) Run:  kubectl --context addons-664377 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 ssh "cat /opt/local-path-provisioner/pvc-26375c24-2083-43d0-b449-5fd92a42642f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-664377 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-664377 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.828379953s)
--- PASS: TestAddons/parallel/LocalPath (52.92s)
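The storage-provisioner-rancher addon provisions the PVC via the local-path provisioner and writes pod data under /opt/local-path-provisioner on the node, as the ssh cat above shows. A minimal claim sketch (the storageClassName "local-path" is the provisioner's usual default and is an assumption here, not read from this log):

    kubectl --context addons-demo apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path    # assumed default class name
      resources:
        requests:
          storage: 128Mi
    EOF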

TestAddons/parallel/NvidiaDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-9sqh7" [21ca1873-ac92-4fef-bc06-b0e9589983f0] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003055645s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (11.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-m8cct" [6f99c31c-f13d-4799-ab8c-991e4b1caac7] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002811811s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-664377 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-664377 addons disable yakd --alsologtostderr -v=1: (5.711231104s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

TestAddons/StoppedEnableDisable (11.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-664377
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-664377: (11.042805412s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-664377
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-664377
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-664377
--- PASS: TestAddons/StoppedEnableDisable (11.32s)

TestCertOptions (37.78s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-280154 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-280154 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.295077717s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-280154 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
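The openssl call above is where the custom --apiserver-ips/--apiserver-names get verified. A hedged way to eyeball the same SANs by hand (profile name from this run; plain minikube stands in for the binary under test):

    minikube -p cert-options-280154 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"
    # expect IP Address:192.168.15.15 and DNS:www.google.com among the SANs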
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-280154 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-280154 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-280154" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-280154
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-280154: (2.500344673s)
--- PASS: TestCertOptions (37.78s)

TestCertExpiration (248.27s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-019756 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-019756 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (36.025303357s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-019756 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0111 08:15:02.801385  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-019756 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.783923578s)
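The gap between the two starts is the test waiting out the 3m certificate lifetime, so the second start with --cert-expiration=8760h has to regenerate the expired certs. A hedged manual check of the current expiry (profile name from this run):

    minikube -p cert-expiration-019756 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"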
helpers_test.go:176: Cleaning up "cert-expiration-019756" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-019756
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-019756: (2.454959937s)
--- PASS: TestCertExpiration (248.27s)

TestDockerFlags (34.69s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-747538 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0111 08:11:17.322460  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:11:17.607028  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-747538 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.339872932s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-747538 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-747538 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
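Those two systemctl queries are where --docker-env and --docker-opt become observable: the env vars land in the docker unit's Environment property, and the opts are appended to dockerd's ExecStart command line. A hedged sketch (property values illustrative; plain minikube stands in for the binary under test):

    minikube -p docker-flags-747538 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # Environment=FOO=BAR BAZ=BAT ...                  <- from --docker-env
    minikube -p docker-flags-747538 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    # ExecStart=... dockerd ... --debug --icc=true ... <- from --docker-opt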
helpers_test.go:176: Cleaning up "docker-flags-747538" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-747538
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-747538: (2.426936927s)
--- PASS: TestDockerFlags (34.69s)

TestErrorSpam/setup (28.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-793156 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-793156 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-793156 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-793156 --driver=docker  --container-runtime=docker: (28.144071349s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (28.14s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.21s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 status
--- PASS: TestErrorSpam/status (1.21s)

TestErrorSpam/pause (1.46s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (11.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 stop: (11.055777928s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-793156 --log_dir /tmp/nospam-793156 stop
--- PASS: TestErrorSpam/stop (11.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22402-276769/.minikube/files/etc/test/nested/copy/278638/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (72.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-480092 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0111 07:31:17.329617  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.335493  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.345820  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.366194  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.406560  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.486932  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.647332  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:17.967961  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:18.608881  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:19.889129  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:22.450424  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:27.570678  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-480092 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m12.783051323s)
--- PASS: TestFunctional/serial/StartWithProxy (72.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.81s)

=== RUN   TestFunctional/serial/SoftStart
I0111 07:31:35.524705  278638 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-480092 --alsologtostderr -v=8
E0111 07:31:37.810960  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:31:58.291852  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-480092 --alsologtostderr -v=8: (41.80621692s)
functional_test.go:678: soft start took 41.810683205s for "functional-480092" cluster.
I0111 07:32:17.331246  278638 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (41.81s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-480092 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 cache add registry.k8s.io/pause:3.1: (1.106624957s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 cache add registry.k8s.io/pause:3.3: (1.136534831s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 cache add registry.k8s.io/pause:latest: (1.036048127s)
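cache add pulls an image on the host side and loads it into the cluster node, so it works even for images the node cannot pull itself. A hedged recap of the commands above plus the list check exercised later in this suite (plain minikube for the binary under test):

    minikube -p functional-480092 cache add registry.k8s.io/pause:3.1
    minikube cache list    # the cached pause tags should now be listed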
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-480092 /tmp/TestFunctionalserialCacheCmdcacheadd_local3879116474/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cache add minikube-local-cache-test:functional-480092
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cache delete minikube-local-cache-test:functional-480092
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-480092
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.498905ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh sudo crictl inspecti registry.k8s.io/pause:latest
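That sequence is the point of cache reload: once the image is removed inside the node, crictl fails with the FATA error shown, and reload pushes every image in the local cache back into the node. A hedged recap (plain minikube for the binary under test):

    minikube -p functional-480092 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-480092 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1, image gone
    minikube -p functional-480092 cache reload
    minikube -p functional-480092 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again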
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 kubectl -- --context functional-480092 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-480092 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (41.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-480092 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0111 07:32:39.252103  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-480092 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.57728341s)
functional_test.go:776: restart took 41.577376552s for "functional-480092" cluster.
I0111 07:33:05.735944  278638 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
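--extra-config takes the component.key=value form and is re-applied on this restart. A hedged spot check that the admission plugin actually reached the apiserver (the pod name assumes the usual kube-apiserver-<node-name> static-pod convention):

    kubectl --context functional-480092 -n kube-system get pod kube-apiserver-functional-480092 -o yaml | grep enable-admission-plugins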
--- PASS: TestFunctional/serial/ExtraConfig (41.58s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-480092 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
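The phase/status pairs above are read from the control-plane pod objects. A hedged one-liner that surfaces the same data directly:

    kubectl --context functional-480092 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{": "}{.status.phase}{"\n"}{end}'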
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 logs: (1.248579852s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 logs --file /tmp/TestFunctionalserialLogsFileCmd1963024708/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 logs --file /tmp/TestFunctionalserialLogsFileCmd1963024708/001/logs.txt: (1.268810367s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

TestFunctional/serial/InvalidService (4.38s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-480092 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-480092
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-480092: exit status 115 (388.22576ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30822 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-480092 delete -f testdata/invalidsvc.yaml
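testdata/invalidsvc.yaml is not reproduced in the log, but the failure mode above only needs a Service whose selector matches no running Pod, so `minikube service` exits 115 with SVC_UNREACHABLE. A hedged stand-in (imperative create instead of the actual testdata; names illustrative):

    kubectl --context functional-480092 create service nodeport invalid-svc --tcp=80:80
    # no Pod carries the default app=invalid-svc label, so the service has no endpoints
    minikube -p functional-480092 service invalid-svc    # exit status 115, SVC_UNREACHABLE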
--- PASS: TestFunctional/serial/InvalidService (4.38s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 config get cpus: exit status 14 (85.955958ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 config get cpus: exit status 14 (64.156671ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
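As the two Non-zero exits show, `config get` on an unset key exits 14 with the "could not be found" error, while a set key round-trips. A hedged recap of the cycle driven above:

    minikube -p functional-480092 config get cpus     # exit 14 while unset
    minikube -p functional-480092 config set cpus 2
    minikube -p functional-480092 config get cpus     # prints 2
    minikube -p functional-480092 config unset cpus
    minikube -p functional-480092 config get cpus     # exit 14 again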
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (12.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-480092 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-480092 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 320899: os: process already finished
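A hedged note on the daemon the test spawns and then stops: --url prints the dashboard's proxy URL instead of opening a browser, and --port 0 lets it pick a free port:

    minikube dashboard --url --port 0 -p functional-480092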
--- PASS: TestFunctional/parallel/DashboardCmd (12.35s)

TestFunctional/parallel/DryRun (0.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-480092 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-480092 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (252.838842ms)
-- stdout --
	* [functional-480092] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0111 07:33:44.042296  320287 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:33:44.042440  320287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:33:44.042467  320287 out.go:374] Setting ErrFile to fd 2...
	I0111 07:33:44.042486  320287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:33:44.042767  320287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:33:44.043216  320287 out.go:368] Setting JSON to false
	I0111 07:33:44.044194  320287 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8174,"bootTime":1768108650,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 07:33:44.044290  320287 start.go:143] virtualization:  
	I0111 07:33:44.047635  320287 out.go:179] * [functional-480092] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0111 07:33:44.050651  320287 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:33:44.050734  320287 notify.go:221] Checking for updates...
	I0111 07:33:44.057767  320287 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:33:44.060693  320287 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 07:33:44.063663  320287 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 07:33:44.066536  320287 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 07:33:44.069432  320287 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:33:44.072953  320287 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:33:44.073533  320287 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:33:44.124617  320287 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:33:44.124722  320287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:33:44.190453  320287 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 07:33:44.180598388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:33:44.190565  320287 docker.go:319] overlay module found
	I0111 07:33:44.193673  320287 out.go:179] * Using the docker driver based on existing profile
	I0111 07:33:44.196564  320287 start.go:309] selected driver: docker
	I0111 07:33:44.196582  320287 start.go:928] validating driver "docker" against &{Name:functional-480092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-480092 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:33:44.196685  320287 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:33:44.200223  320287 out.go:203] 
	W0111 07:33:44.204030  320287 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0111 07:33:44.207108  320287 out.go:203] 
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-480092 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.61s)

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-480092 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-480092 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (253.841351ms)
-- stdout --
	* [functional-480092] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0111 07:33:43.775779  320205 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:33:43.775912  320205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:33:43.775923  320205 out.go:374] Setting ErrFile to fd 2...
	I0111 07:33:43.775928  320205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:33:43.776954  320205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:33:43.777346  320205 out.go:368] Setting JSON to false
	I0111 07:33:43.778295  320205 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":8174,"bootTime":1768108650,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0111 07:33:43.778366  320205 start.go:143] virtualization:  
	I0111 07:33:43.781873  320205 out.go:179] * [functional-480092] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0111 07:33:43.785082  320205 out.go:179]   - MINIKUBE_LOCATION=22402
	I0111 07:33:43.785168  320205 notify.go:221] Checking for updates...
	I0111 07:33:43.791817  320205 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0111 07:33:43.794717  320205 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	I0111 07:33:43.797599  320205 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	I0111 07:33:43.800483  320205 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0111 07:33:43.804640  320205 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0111 07:33:43.809265  320205 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:33:43.810236  320205 driver.go:422] Setting default libvirt URI to qemu:///system
	I0111 07:33:43.847008  320205 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0111 07:33:43.847127  320205 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:33:43.943070  320205 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-11 07:33:43.931716298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:33:43.943169  320205 docker.go:319] overlay module found
	I0111 07:33:43.946358  320205 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0111 07:33:43.950097  320205 start.go:309] selected driver: docker
	I0111 07:33:43.950122  320205 start.go:928] validating driver "docker" against &{Name:functional-480092 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1768032998-22402@sha256:83181c7d554248f6da9fea13a1b8f8bceb119689247f97c067191ee5aa1ac615 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-480092 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0111 07:33:43.950231  320205 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0111 07:33:43.953781  320205 out.go:203] 
	W0111 07:33:43.956601  320205 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0111 07:33:43.959277  320205 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.22s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 status -o json
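The -f argument is a Go template rendered against minikube's status struct ("kublet" is just the literal label the test chose, not a field name). A hedged example with illustrative output:

    minikube -p functional-480092 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured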
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

TestFunctional/parallel/ServiceCmdConnect (8.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-480092 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-480092 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-rpw2m" [5d045ad4-adc3-4991-91a7-9299b3c49dbf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-rpw2m" [5d045ad4-adc3-4991-91a7-9299b3c49dbf] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003216181s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:32607
functional_test.go:1685: http://192.168.49.2:32607: success! body:
Request served by hello-node-connect-5d95464fd4-rpw2m

HTTP/1.1 GET /

Host: 192.168.49.2:32607
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
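The body above is the echo server reporting the request as seen inside the pod. A hedged equivalent of the check, reusing the NodePort URL that minikube prints:

    URL=$(minikube -p functional-480092 service hello-node-connect --url)
    curl -s "$URL"    # body names the serving pod and echoes the request headers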
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.61s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (20.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [e384c2d9-a6ce-48c7-8944-afce6f1a8645] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003509881s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-480092 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-480092 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-480092 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-480092 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [1efde35c-434d-4887-b61a-fc1840e531bc] Pending
helpers_test.go:353: "sp-pod" [1efde35c-434d-4887-b61a-fc1840e531bc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003509943s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-480092 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-480092 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-480092 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [649a205c-a9e3-41e0-80ac-d86d70c5b4fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [649a205c-a9e3-41e0-80ac-d86d70c5b4fa] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003767627s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-480092 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.01s)

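The persistence check above can be replayed by hand (a sketch; pvc.yaml and pod.yaml are the manifests under minikube's integration-test testdata, and the file written before deleting the pod must still be visible once a fresh pod mounts the same claim):

  kubectl --context functional-480092 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-480092 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-480092 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-480092 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-480092 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-480092 exec sp-pod -- ls /tmp/mount   # foo should survive the pod recreation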
TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh -n functional-480092 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cp functional-480092:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1306336136/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh -n functional-480092 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh -n functional-480092 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.38s)

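The cp checks copy a file in both directions and verify each copy with ssh cat. A minimal sketch with the same profile (the host-side destination path is arbitrary here):

  out/minikube-linux-arm64 -p functional-480092 cp testdata/cp-test.txt /home/docker/cp-test.txt                  # host -> node
  out/minikube-linux-arm64 -p functional-480092 ssh -n functional-480092 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-arm64 -p functional-480092 cp functional-480092:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host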
TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/278638/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/test/nested/copy/278638/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/278638.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/ssl/certs/278638.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/278638.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /usr/share/ca-certificates/278638.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/2786382.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/ssl/certs/2786382.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/2786382.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /usr/share/ca-certificates/2786382.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

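CertSync asserts that a cert placed in the .minikube files tree is synced into both guest cert locations, plus a hash-named entry (51391683.0 / 3ec20f2e.0 above). A quick manual probe, assuming the same profile:

  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/ssl/certs/278638.pem"
  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /usr/share/ca-certificates/278638.pem"
  out/minikube-linux-arm64 -p functional-480092 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy of the same cert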
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-480092 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh "sudo systemctl is-active crio": exit status 1 (344.449768ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

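The check leans on systemctl's exit-code contract: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active (ssh reports status 3 above), so on a docker-runtime cluster the cri-o probe is expected to fail. To confirm by hand:

  out/minikube-linux-arm64 -p functional-480092 ssh "sudo systemctl is-active crio"   # prints "inactive"; the wrapper exits 1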
TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 317033: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-480092 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [3f6b78b7-628e-4109-9424-457d09a2e7ed] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [3f6b78b7-628e-4109-9424-457d09a2e7ed] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004061864s
I0111 07:33:23.963992  278638 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-480092 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.12.14 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

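Putting the tunnel steps together: `minikube tunnel` runs as a long-lived daemon, the LoadBalancer service then gets an ingress IP (10.110.12.14 above), and plain HTTP to that IP works from the host. A sketch, with curl in place of the test's in-process GET:

  out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr &   # leave running
  kubectl --context functional-480092 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.110.12.14   # substitute the IP printed by the previous command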
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-480092 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-480092 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-480092 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-q9ml4" [fadc7d33-5b6b-4b18-934f-0f17a6e029fc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-q9ml4" [fadc7d33-5b6b-4b18-934f-0f17a6e029fc] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003975992s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "368.84926ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "60.283017ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "368.458402ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "50.965835ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

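The profile-listing variants above differ mainly in cost: the full forms (~370ms here) query cluster status, while -l / --light (~50-60ms) skip that validation. For reference:

  out/minikube-linux-arm64 profile list               # table output, full status
  out/minikube-linux-arm64 profile list -l            # light: skips status checks
  out/minikube-linux-arm64 profile list -o json
  out/minikube-linux-arm64 profile list -o json --light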
TestFunctional/parallel/MountCmd/any-port (8.28s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdany-port3703048204/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768116816437961631" to /tmp/TestFunctionalparallelMountCmdany-port3703048204/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768116816437961631" to /tmp/TestFunctionalparallelMountCmdany-port3703048204/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768116816437961631" to /tmp/TestFunctionalparallelMountCmdany-port3703048204/001/test-1768116816437961631
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.526956ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0111 07:33:36.759524  278638 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 11 07:33 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 11 07:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 11 07:33 test-1768116816437961631
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh cat /mount-9p/test-1768116816437961631
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-480092 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [18200fec-aeaf-4a41-858c-8554564d34ce] Pending
helpers_test.go:353: "busybox-mount" [18200fec-aeaf-4a41-858c-8554564d34ce] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [18200fec-aeaf-4a41-858c-8554564d34ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [18200fec-aeaf-4a41-858c-8554564d34ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005051887s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-480092 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdany-port3703048204/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.28s)

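The 9p mount check retries findmnt until the mount daemon is ready (the first probe fails, a 500ms retry succeeds), then verifies host-written files in the guest and pod-written files back on the host. A manual equivalent, with /tmp/scratch as a hypothetical stand-in for the per-test temp dir:

  out/minikube-linux-arm64 mount -p functional-480092 /tmp/scratch:/mount-9p &   # host dir -> guest path over 9p
  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-480092 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 -p functional-480092 ssh "sudo umount -f /mount-9p"   # cleanup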
TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 service list -o json
functional_test.go:1509: Took "523.667925ms" to run "out/minikube-linux-arm64 -p functional-480092 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30478
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30478
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

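The three URL-retrieval forms above resolve the same NodePort service (30478 in this run) with different output shapes:

  out/minikube-linux-arm64 -p functional-480092 service hello-node --url                               # http URL
  out/minikube-linux-arm64 -p functional-480092 service --namespace=default --https --url hello-node   # https scheme
  out/minikube-linux-arm64 -p functional-480092 service hello-node --url --format={{.IP}}              # node IP only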
TestFunctional/parallel/MountCmd/specific-port (2.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdspecific-port1035357890/001:/mount-9p --alsologtostderr -v=1 --port 45745]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (539.140347ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0111 07:33:45.254049  278638 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdspecific-port1035357890/001:/mount-9p --alsologtostderr -v=1 --port 45745] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh "sudo umount -f /mount-9p": exit status 1 (352.86817ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-480092 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdspecific-port1035357890/001:/mount-9p --alsologtostderr -v=1 --port 45745] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.56s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1690141526/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1690141526/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1690141526/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T" /mount1: exit status 1 (1.082528618s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0111 07:33:48.361764  278638 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-480092 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1690141526/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1690141526/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-480092 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1690141526/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.77s)

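VerifyCleanup exercises `mount --kill=true`, which appears to tear down every mount daemon for the profile at once; the per-mount stop attempts that follow then find the parent processes already gone ("unable to find parent, assuming dead"). The cleanup command on its own:

  out/minikube-linux-arm64 mount -p functional-480092 --kill=true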
TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 version -o=json --components: (1.186812803s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-480092 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-480092
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-480092 image ls --format short --alsologtostderr:
I0111 07:33:59.082418  323273 out.go:360] Setting OutFile to fd 1 ...
I0111 07:33:59.082583  323273 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.082589  323273 out.go:374] Setting ErrFile to fd 2...
I0111 07:33:59.082594  323273 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.082894  323273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 07:33:59.083512  323273 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.083633  323273 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.084180  323273 cli_runner.go:164] Run: docker container inspect functional-480092 --format={{.State.Status}}
I0111 07:33:59.108027  323273 ssh_runner.go:195] Run: systemctl --version
I0111 07:33:59.108071  323273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-480092
I0111 07:33:59.127752  323273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/functional-480092/id_rsa Username:docker}
I0111 07:33:59.233448  323273 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

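The four ImageList variants in this group differ only in the --format flag; each run issues the same `docker images --no-trunc --format "{{json .}}"` inside the guest and re-renders the result:

  out/minikube-linux-arm64 -p functional-480092 image ls --format short
  out/minikube-linux-arm64 -p functional-480092 image ls --format table
  out/minikube-linux-arm64 -p functional-480092 image ls --format json
  out/minikube-linux-arm64 -p functional-480092 image ls --format yaml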
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-480092 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ e08f4d9d2e6ed │ 73.4MB │
│ registry.k8s.io/pause                             │ 3.3               │ 3d18732f8686c │ 484kB  │
│ registry.k8s.io/pause                             │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/pause                             │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ c3fcf259c473a │ 83.9MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-480092 │ ce2d2cda2d858 │ 4.78MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                             │ latest            │ 8cb2091f603e7 │ 240kB  │
│ docker.io/library/minikube-local-cache-test       │ functional-480092 │ 76b93afccc6d0 │ 30B    │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ de369f46c2ff5 │ 72.8MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ ba04bb24b9575 │ 29MB   │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 611c6647fcbbc │ 61.2MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ ddc8422d4d35a │ 48.7MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 88898f1d1a62a │ 71.1MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 271e49a0ebc56 │ 59.8MB │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-480092 image ls --format table --alsologtostderr:
I0111 07:34:00.164894  323610 out.go:360] Setting OutFile to fd 1 ...
I0111 07:34:00.167081  323610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:34:00.167153  323610 out.go:374] Setting ErrFile to fd 2...
I0111 07:34:00.167176  323610 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:34:00.167668  323610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 07:34:00.168771  323610 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:34:00.169014  323610 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:34:00.169883  323610 cli_runner.go:164] Run: docker container inspect functional-480092 --format={{.State.Status}}
I0111 07:34:00.218346  323610 ssh_runner.go:195] Run: systemctl --version
I0111 07:34:00.218623  323610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-480092
I0111 07:34:00.256826  323610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/functional-480092/id_rsa Username:docker}
I0111 07:34:00.414802  323610 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-480092 image ls --format json --alsologtostderr:
[{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"59800000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"71100000"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"72800000"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"73400000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbc
de6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"61200000"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"83900000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"76b93afccc6d0bcb8b1e8cbe8c764b6990f4a65b9255f28e
dea4053d558beb82","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-480092"],"size":"30"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"48700000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-480092 image ls --format json --alsologtostderr:
I0111 07:33:59.825923  323538 out.go:360] Setting OutFile to fd 1 ...
I0111 07:33:59.826463  323538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.826501  323538 out.go:374] Setting ErrFile to fd 2...
I0111 07:33:59.826521  323538 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.826813  323538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 07:33:59.827654  323538 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.827884  323538 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.828671  323538 cli_runner.go:164] Run: docker container inspect functional-480092 --format={{.State.Status}}
I0111 07:33:59.849125  323538 ssh_runner.go:195] Run: systemctl --version
I0111 07:33:59.849175  323538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-480092
I0111 07:33:59.868959  323538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/functional-480092/id_rsa Username:docker}
I0111 07:33:59.980810  323538 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-480092 image ls --format yaml --alsologtostderr:
- id: 611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "61200000"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "72800000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 76b93afccc6d0bcb8b1e8cbe8c764b6990f4a65b9255f28edea4053d558beb82
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-480092
size: "30"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "83900000"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "59800000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4780000"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "73400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "71100000"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "48700000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-480092 image ls --format yaml --alsologtostderr:
I0111 07:33:59.549695  323448 out.go:360] Setting OutFile to fd 1 ...
I0111 07:33:59.549859  323448 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.549867  323448 out.go:374] Setting ErrFile to fd 2...
I0111 07:33:59.549872  323448 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.550122  323448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 07:33:59.550815  323448 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.550987  323448 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.551562  323448 cli_runner.go:164] Run: docker container inspect functional-480092 --format={{.State.Status}}
I0111 07:33:59.580780  323448 ssh_runner.go:195] Run: systemctl --version
I0111 07:33:59.580840  323448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-480092
I0111 07:33:59.602020  323448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/functional-480092/id_rsa Username:docker}
I0111 07:33:59.705517  323448 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-480092 ssh pgrep buildkitd: exit status 1 (358.069068ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image build -t localhost/my-image:functional-480092 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-480092 image build -t localhost/my-image:functional-480092 testdata/build --alsologtostderr: (3.689860264s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-480092 image build -t localhost/my-image:functional-480092 testdata/build --alsologtostderr:
I0111 07:33:59.801484  323533 out.go:360] Setting OutFile to fd 1 ...
I0111 07:33:59.802424  323533 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.802439  323533 out.go:374] Setting ErrFile to fd 2...
I0111 07:33:59.802445  323533 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0111 07:33:59.802744  323533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
I0111 07:33:59.803652  323533 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.805161  323533 config.go:182] Loaded profile config "functional-480092": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0111 07:33:59.805755  323533 cli_runner.go:164] Run: docker container inspect functional-480092 --format={{.State.Status}}
I0111 07:33:59.834910  323533 ssh_runner.go:195] Run: systemctl --version
I0111 07:33:59.834975  323533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-480092
I0111 07:33:59.857217  323533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/functional-480092/id_rsa Username:docker}
I0111 07:33:59.977178  323533 build_images.go:162] Building image from path: /tmp/build.1971476801.tar
I0111 07:33:59.977246  323533 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0111 07:33:59.986724  323533 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1971476801.tar
I0111 07:33:59.990872  323533 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1971476801.tar: stat -c "%s %y" /var/lib/minikube/build/build.1971476801.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1971476801.tar': No such file or directory
I0111 07:33:59.990907  323533 ssh_runner.go:362] scp /tmp/build.1971476801.tar --> /var/lib/minikube/build/build.1971476801.tar (3072 bytes)
I0111 07:34:00.061797  323533 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1971476801
I0111 07:34:00.104893  323533 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1971476801 -xf /var/lib/minikube/build/build.1971476801.tar
I0111 07:34:00.154650  323533 docker.go:364] Building image: /var/lib/minikube/build/build.1971476801
I0111 07:34:00.154737  323533 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-480092 /var/lib/minikube/build/build.1971476801
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:2d37e2d808831bfc7bd71dbf73670780ff1874791f54337282846f6c5a54128a done
#8 naming to localhost/my-image:functional-480092 done
#8 DONE 0.1s
I0111 07:34:03.395873  323533 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-480092 /var/lib/minikube/build/build.1971476801: (3.241099014s)
I0111 07:34:03.395944  323533 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1971476801
I0111 07:34:03.404272  323533 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1971476801.tar
I0111 07:34:03.413087  323533 build_images.go:218] Built localhost/my-image:functional-480092 from /tmp/build.1971476801.tar
I0111 07:34:03.413117  323533 build_images.go:134] succeeded building to: functional-480092
I0111 07:34:03.413123  323533 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)
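For reference, the three build stages recorded above (FROM the busybox base, RUN true, ADD content.txt) can be driven by hand against the same profile. A sketch, where the build-context path is illustrative but the tag and profile come from this run:

    out/minikube-linux-arm64 -p functional-480092 image build -t localhost/my-image:functional-480092 ./testdata/build
    out/minikube-linux-arm64 -p functional-480092 image ls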

TestFunctional/parallel/ImageCommands/Setup (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
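Taken together, the save/remove/load/save-daemon steps above make a full image round trip through a tarball. The same flow by hand, using the commands recorded in this run (the /tmp path is illustrative):

    out/minikube-linux-arm64 -p functional-480092 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-480092 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
    out/minikube-linux-arm64 -p functional-480092 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-480092 image ls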

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 update-context --alsologtostderr -v=2
E0111 07:34:01.173126  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-480092 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
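All three UpdateContextCmd cases above run the same command; update-context rewrites the profile's kubeconfig entry so it points at the current API server address. To verify the result by hand (a sketch; the kubectl step is an assumption, not part of the test):

    out/minikube-linux-arm64 -p functional-480092 update-context
    kubectl config view --minify --context functional-480092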

TestFunctional/parallel/DockerEnv/bash (1.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-480092 docker-env) && out/minikube-linux-arm64 status -p functional-480092"
2026/01/11 07:33:56 [DEBUG] GET http://127.0.0.1:41139/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-480092 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.29s)
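The DockerEnv check above confirms that a host shell can be pointed at the Docker daemon inside the minikube node. The pattern, as exercised by the test (the --unset step for restoring the shell afterwards is an assumption):

    eval $(out/minikube-linux-arm64 -p functional-480092 docker-env)
    docker images
    eval $(out/minikube-linux-arm64 -p functional-480092 docker-env --unset)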

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-480092
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-480092
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-480092
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (151.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0111 07:36:17.322895  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m30.634190898s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (151.59s)
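The cluster layout used by the rest of this suite comes from the start flags above: --ha provisions a multi-control-plane cluster behind a shared endpoint, and --wait true blocks until components report healthy. A minimal equivalent invocation, lifted from the log:

    out/minikube-linux-arm64 -p ha-678999 start --ha --memory 3072 --wait true --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 -p ha-678999 status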

TestMultiControlPlane/serial/DeployApp (7.78s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 kubectl -- rollout status deployment/busybox: (4.458623707s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-25jgk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-bsvpb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-vzclb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-25jgk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-bsvpb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-vzclb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-25jgk -- nslookup kubernetes.default.svc.cluster.local
E0111 07:36:45.013434  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-bsvpb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-vzclb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.78s)

TestMultiControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-25jgk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-25jgk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-bsvpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-bsvpb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-vzclb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 kubectl -- exec busybox-769dd8b7dd-vzclb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)

TestMultiControlPlane/serial/AddWorkerNode (35.8s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 node add --alsologtostderr -v 5: (34.759619972s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5: (1.041631907s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.80s)

TestMultiControlPlane/serial/NodeLabels (0.14s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-678999 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.085957009s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

TestMultiControlPlane/serial/CopyFile (20.8s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 status --output json --alsologtostderr -v 5: (1.06113096s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp testdata/cp-test.txt ha-678999:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile7843538/001/cp-test_ha-678999.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999:/home/docker/cp-test.txt ha-678999-m02:/home/docker/cp-test_ha-678999_ha-678999-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test_ha-678999_ha-678999-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999:/home/docker/cp-test.txt ha-678999-m03:/home/docker/cp-test_ha-678999_ha-678999-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test_ha-678999_ha-678999-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999:/home/docker/cp-test.txt ha-678999-m04:/home/docker/cp-test_ha-678999_ha-678999-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test_ha-678999_ha-678999-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp testdata/cp-test.txt ha-678999-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile7843538/001/cp-test_ha-678999-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m02:/home/docker/cp-test.txt ha-678999:/home/docker/cp-test_ha-678999-m02_ha-678999.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test_ha-678999-m02_ha-678999.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m02:/home/docker/cp-test.txt ha-678999-m03:/home/docker/cp-test_ha-678999-m02_ha-678999-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test_ha-678999-m02_ha-678999-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m02:/home/docker/cp-test.txt ha-678999-m04:/home/docker/cp-test_ha-678999-m02_ha-678999-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test_ha-678999-m02_ha-678999-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp testdata/cp-test.txt ha-678999-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile7843538/001/cp-test_ha-678999-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m03:/home/docker/cp-test.txt ha-678999:/home/docker/cp-test_ha-678999-m03_ha-678999.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test_ha-678999-m03_ha-678999.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m03:/home/docker/cp-test.txt ha-678999-m02:/home/docker/cp-test_ha-678999-m03_ha-678999-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test_ha-678999-m03_ha-678999-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m03:/home/docker/cp-test.txt ha-678999-m04:/home/docker/cp-test_ha-678999-m03_ha-678999-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test_ha-678999-m03_ha-678999-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp testdata/cp-test.txt ha-678999-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile7843538/001/cp-test_ha-678999-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m04:/home/docker/cp-test.txt ha-678999:/home/docker/cp-test_ha-678999-m04_ha-678999.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999 "sudo cat /home/docker/cp-test_ha-678999-m04_ha-678999.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m04:/home/docker/cp-test.txt ha-678999-m02:/home/docker/cp-test_ha-678999-m04_ha-678999-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test_ha-678999-m04_ha-678999-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 cp ha-678999-m04:/home/docker/cp-test.txt ha-678999-m03:/home/docker/cp-test_ha-678999-m04_ha-678999-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m03 "sudo cat /home/docker/cp-test_ha-678999-m04_ha-678999-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.80s)
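The copy matrix above exercises minikube cp in every direction between the host and the four nodes; a <node>:<path> argument selects a node on either side. One representative pair, lifted from the run:

    out/minikube-linux-arm64 -p ha-678999 cp testdata/cp-test.txt ha-678999-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-678999 ssh -n ha-678999-m02 "sudo cat /home/docker/cp-test.txt"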

TestMultiControlPlane/serial/StopSecondaryNode (12.07s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 node stop m02 --alsologtostderr -v 5: (11.272627634s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5: exit status 7 (801.319736ms)

-- stdout --
	ha-678999
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-678999-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-678999-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-678999-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0111 07:37:56.587047  345613 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:37:56.587180  345613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:37:56.587191  345613 out.go:374] Setting ErrFile to fd 2...
	I0111 07:37:56.587197  345613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:37:56.587463  345613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:37:56.587673  345613 out.go:368] Setting JSON to false
	I0111 07:37:56.587702  345613 mustload.go:66] Loading cluster: ha-678999
	I0111 07:37:56.587862  345613 notify.go:221] Checking for updates...
	I0111 07:37:56.588114  345613 config.go:182] Loaded profile config "ha-678999": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:37:56.588131  345613 status.go:174] checking status of ha-678999 ...
	I0111 07:37:56.588939  345613 cli_runner.go:164] Run: docker container inspect ha-678999 --format={{.State.Status}}
	I0111 07:37:56.614431  345613 status.go:371] ha-678999 host status = "Running" (err=<nil>)
	I0111 07:37:56.614455  345613 host.go:66] Checking if "ha-678999" exists ...
	I0111 07:37:56.614759  345613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-678999
	I0111 07:37:56.636512  345613 host.go:66] Checking if "ha-678999" exists ...
	I0111 07:37:56.636829  345613 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:37:56.636885  345613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-678999
	I0111 07:37:56.665438  345613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/ha-678999/id_rsa Username:docker}
	I0111 07:37:56.772726  345613 ssh_runner.go:195] Run: systemctl --version
	I0111 07:37:56.779372  345613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:37:56.792576  345613 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:37:56.851309  345613 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-11 07:37:56.841515704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:37:56.851855  345613 kubeconfig.go:125] found "ha-678999" server: "https://192.168.49.254:8443"
	I0111 07:37:56.851899  345613 api_server.go:166] Checking apiserver status ...
	I0111 07:37:56.851945  345613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:37:56.866100  345613 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2141/cgroup
	I0111 07:37:56.874962  345613 api_server.go:192] apiserver freezer: "11:freezer:/docker/f24207fc9c75e730adbae8ef28f203524f8e4e5add24d77f8822fcb224709930/kubepods/burstable/pod3610c6bfb30d8cf826e9ba2a100cc730/c4f1c9aa49859771dec6a8318a09413e5d77af985172457e4a004618ff7ff009"
	I0111 07:37:56.875033  345613 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f24207fc9c75e730adbae8ef28f203524f8e4e5add24d77f8822fcb224709930/kubepods/burstable/pod3610c6bfb30d8cf826e9ba2a100cc730/c4f1c9aa49859771dec6a8318a09413e5d77af985172457e4a004618ff7ff009/freezer.state
	I0111 07:37:56.882730  345613 api_server.go:214] freezer state: "THAWED"
	I0111 07:37:56.882758  345613 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 07:37:56.894009  345613 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 07:37:56.894039  345613 status.go:463] ha-678999 apiserver status = Running (err=<nil>)
	I0111 07:37:56.894052  345613 status.go:176] ha-678999 status: &{Name:ha-678999 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:37:56.894096  345613 status.go:174] checking status of ha-678999-m02 ...
	I0111 07:37:56.894437  345613 cli_runner.go:164] Run: docker container inspect ha-678999-m02 --format={{.State.Status}}
	I0111 07:37:56.916791  345613 status.go:371] ha-678999-m02 host status = "Stopped" (err=<nil>)
	I0111 07:37:56.916837  345613 status.go:384] host is not running, skipping remaining checks
	I0111 07:37:56.916844  345613 status.go:176] ha-678999-m02 status: &{Name:ha-678999-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:37:56.916864  345613 status.go:174] checking status of ha-678999-m03 ...
	I0111 07:37:56.917208  345613 cli_runner.go:164] Run: docker container inspect ha-678999-m03 --format={{.State.Status}}
	I0111 07:37:56.937848  345613 status.go:371] ha-678999-m03 host status = "Running" (err=<nil>)
	I0111 07:37:56.937871  345613 host.go:66] Checking if "ha-678999-m03" exists ...
	I0111 07:37:56.938172  345613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-678999-m03
	I0111 07:37:56.955803  345613 host.go:66] Checking if "ha-678999-m03" exists ...
	I0111 07:37:56.956112  345613 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:37:56.956157  345613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-678999-m03
	I0111 07:37:56.974255  345613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/ha-678999-m03/id_rsa Username:docker}
	I0111 07:37:57.078261  345613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:37:57.094522  345613 kubeconfig.go:125] found "ha-678999" server: "https://192.168.49.254:8443"
	I0111 07:37:57.094551  345613 api_server.go:166] Checking apiserver status ...
	I0111 07:37:57.094592  345613 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:37:57.108588  345613 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2167/cgroup
	I0111 07:37:57.120296  345613 api_server.go:192] apiserver freezer: "11:freezer:/docker/35ba1f20fc5742fa87b9862307f1edd51056b52445f8205ed84815dad3080bd0/kubepods/burstable/pod0f36150a8cca533025de4c0a5f55a991/773d1617b62780e2db9ede1fdc3dda4db31520e50cf429fdcac78a821c6c11b1"
	I0111 07:37:57.120371  345613 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/35ba1f20fc5742fa87b9862307f1edd51056b52445f8205ed84815dad3080bd0/kubepods/burstable/pod0f36150a8cca533025de4c0a5f55a991/773d1617b62780e2db9ede1fdc3dda4db31520e50cf429fdcac78a821c6c11b1/freezer.state
	I0111 07:37:57.130337  345613 api_server.go:214] freezer state: "THAWED"
	I0111 07:37:57.130367  345613 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0111 07:37:57.138648  345613 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0111 07:37:57.138680  345613 status.go:463] ha-678999-m03 apiserver status = Running (err=<nil>)
	I0111 07:37:57.138691  345613 status.go:176] ha-678999-m03 status: &{Name:ha-678999-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:37:57.138707  345613 status.go:174] checking status of ha-678999-m04 ...
	I0111 07:37:57.139148  345613 cli_runner.go:164] Run: docker container inspect ha-678999-m04 --format={{.State.Status}}
	I0111 07:37:57.167645  345613 status.go:371] ha-678999-m04 host status = "Running" (err=<nil>)
	I0111 07:37:57.167671  345613 host.go:66] Checking if "ha-678999-m04" exists ...
	I0111 07:37:57.167985  345613 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-678999-m04
	I0111 07:37:57.187202  345613 host.go:66] Checking if "ha-678999-m04" exists ...
	I0111 07:37:57.187533  345613 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:37:57.187579  345613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-678999-m04
	I0111 07:37:57.205704  345613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/ha-678999-m04/id_rsa Username:docker}
	I0111 07:37:57.308573  345613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:37:57.337626  345613 status.go:176] ha-678999-m04 status: &{Name:ha-678999-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.07s)
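Note that status intentionally exits non-zero while any node is down (exit status 7 in this run), which is why the non-zero exit above still counts as a pass. A quick way to observe it (the echo is illustrative):

    out/minikube-linux-arm64 -p ha-678999 node stop m02
    out/minikube-linux-arm64 -p ha-678999 status; echo "exit: $?"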

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (48.58s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node start m02 --alsologtostderr -v 5
E0111 07:38:14.559608  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:14.564986  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:14.575255  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:14.595533  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:14.635795  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:14.716170  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:14.876624  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:15.197304  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:15.837765  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:17.118812  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:19.679515  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:24.800425  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:38:35.040575  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 node start m02 --alsologtostderr -v 5: (47.305722824s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5: (1.164252618s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.36s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.357444315s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.36s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (148.96s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 stop --alsologtostderr -v 5
E0111 07:38:55.520755  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 stop --alsologtostderr -v 5: (35.2201201s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 start --wait true --alsologtostderr -v 5
E0111 07:39:36.481555  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:40:58.402013  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 start --wait true --alsologtostderr -v 5: (1m53.55839014s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (148.96s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node delete m03 --alsologtostderr -v 5
E0111 07:41:17.322205  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 node delete m03 --alsologtostderr -v 5: (10.575245654s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.60s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (33.46s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 stop --alsologtostderr -v 5: (33.347216267s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5: exit status 7 (116.220354ms)

-- stdout --
	ha-678999
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-678999-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-678999-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0111 07:42:02.857609  373058 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:42:02.857747  373058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:42:02.857757  373058 out.go:374] Setting ErrFile to fd 2...
	I0111 07:42:02.857763  373058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:42:02.858054  373058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:42:02.858271  373058 out.go:368] Setting JSON to false
	I0111 07:42:02.858302  373058 mustload.go:66] Loading cluster: ha-678999
	I0111 07:42:02.858345  373058 notify.go:221] Checking for updates...
	I0111 07:42:02.858775  373058 config.go:182] Loaded profile config "ha-678999": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:42:02.858821  373058 status.go:174] checking status of ha-678999 ...
	I0111 07:42:02.859418  373058 cli_runner.go:164] Run: docker container inspect ha-678999 --format={{.State.Status}}
	I0111 07:42:02.879107  373058 status.go:371] ha-678999 host status = "Stopped" (err=<nil>)
	I0111 07:42:02.879128  373058 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:02.879135  373058 status.go:176] ha-678999 status: &{Name:ha-678999 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:42:02.879177  373058 status.go:174] checking status of ha-678999-m02 ...
	I0111 07:42:02.879488  373058 cli_runner.go:164] Run: docker container inspect ha-678999-m02 --format={{.State.Status}}
	I0111 07:42:02.902183  373058 status.go:371] ha-678999-m02 host status = "Stopped" (err=<nil>)
	I0111 07:42:02.902200  373058 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:02.902207  373058 status.go:176] ha-678999-m02 status: &{Name:ha-678999-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:42:02.902224  373058 status.go:174] checking status of ha-678999-m04 ...
	I0111 07:42:02.902496  373058 cli_runner.go:164] Run: docker container inspect ha-678999-m04 --format={{.State.Status}}
	I0111 07:42:02.922780  373058 status.go:371] ha-678999-m04 host status = "Stopped" (err=<nil>)
	I0111 07:42:02.922801  373058 status.go:384] host is not running, skipping remaining checks
	I0111 07:42:02.922807  373058 status.go:176] ha-678999-m04 status: &{Name:ha-678999-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.46s)

TestMultiControlPlane/serial/RestartCluster (67.67s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m6.570625994s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.67s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (60.16s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 node add --control-plane --alsologtostderr -v 5
E0111 07:43:14.554967  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 07:43:42.245443  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 node add --control-plane --alsologtostderr -v 5: (59.021714999s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-678999 status --alsologtostderr -v 5: (1.140826429s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (60.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.158520589s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.16s)

TestImageBuild/serial/Setup (28.56s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-735462 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-735462 --driver=docker  --container-runtime=docker: (28.557355954s)
--- PASS: TestImageBuild/serial/Setup (28.56s)

TestImageBuild/serial/NormalBuild (1.72s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-735462
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-735462: (1.720751525s)
--- PASS: TestImageBuild/serial/NormalBuild (1.72s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-735462
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.76s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-735462
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.76s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-735462
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-735462: (1.049276827s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

TestJSONOutput/start/Command (69.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-435396 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-435396 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m9.662004711s)
--- PASS: TestJSONOutput/start/Command (69.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-435396 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-435396 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-435396 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-435396 --output=json --user=testUser: (6.139470259s)
--- PASS: TestJSONOutput/stop/Command (6.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-121160 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-121160 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (89.892549ms)

-- stdout --
	{"specversion":"1.0","id":"0f2b67cb-2061-4417-8035-9e84507f457f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-121160] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a5f5901-d161-47a3-b0df-6f6715e95225","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"dd2da349-54dc-47a0-a673-4af048cac197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1dbf99a4-06d7-4934-a22d-169ea72d5ba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig"}}
	{"specversion":"1.0","id":"e6df74f8-6fd7-4a31-b890-e418c99c1a30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube"}}
	{"specversion":"1.0","id":"c01e451a-bb86-4b11-8e43-9c6335dbe988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6b5c7050-1d26-481f-a9f6-a726afad2d3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f11d4aa0-cd3d-431e-bace-5e99907d2d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
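
The stdout above is minikube's --output=json stream: one CloudEvents envelope per line, with step, info, and error payloads carried under the data field; the unsupported driver surfaces as exit code 56 with name DRV_UNSUPPORTED_OS. A minimal Go sketch of consuming such a stream, assuming only the field names visible in this log (not a published schema):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents envelope printed above by --output=json.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: out/minikube-linux-arm64 start --output=json ...
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error": // type string as seen in the log above
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
}

A reader along these lines is presumably also how the DistinctCurrentSteps and IncreasingCurrentSteps checks above inspect the currentstep values carried by the step events.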
helpers_test.go:176: Cleaning up "json-output-error-121160" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-121160
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (32.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-114116 --network=
E0111 07:46:17.322693  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-114116 --network=: (29.792722104s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-114116" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-114116
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-114116: (2.25659026s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.07s)

TestKicCustomNetwork/use_default_bridge_network (30.8s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-092542 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-092542 --network=bridge: (28.707026096s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-092542" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-092542
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-092542: (2.067686932s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.80s)

TestKicExistingNetwork (28.54s)

=== RUN   TestKicExistingNetwork
I0111 07:47:19.575325  278638 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 07:47:19.591224  278638 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 07:47:19.591314  278638 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0111 07:47:19.591338  278638 cli_runner.go:164] Run: docker network inspect existing-network
W0111 07:47:19.606893  278638 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0111 07:47:19.606932  278638 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0111 07:47:19.606947  278638 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0111 07:47:19.607051  278638 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 07:47:19.624009  278638 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4553382a3354 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ce:ef:e3:80:f0:4e} reservation:<nil>}
I0111 07:47:19.624301  278638 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a480c0}
I0111 07:47:19.624332  278638 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0111 07:47:19.624384  278638 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0111 07:47:19.694308  278638 network_create.go:108] docker network existing-network 192.168.58.0/24 created
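
The network_create lines above record the free-subnet scan: 192.168.49.0/24 is skipped because an existing bridge (br-4553382a3354) already holds it, and 192.168.58.0/24 is used as the first free private /24. A rough Go sketch of that selection under stated assumptions: candidates are probed in ascending order, and the step of 9 between third octets (49, 58, then 67 for multinode later in this report) is inferred from the log, not from minikube's network package.

package main

import "fmt"

// firstFreeSubnet returns the first candidate private /24 that no existing
// Docker network claims, mirroring the skip/use decisions logged above.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	// Third octets 49, 58, 67, ... — the step of 9 is an assumption
	// inferred from the subnets visible in this report.
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	// The taken set would come from `docker network inspect` in practice.
	taken := map[string]bool{"192.168.49.0/24": true}
	if s, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", s) // 192.168.58.0/24, as above
	}
}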
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-333691 --network=existing-network
E0111 07:47:40.374952  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-333691 --network=existing-network: (26.280803477s)
helpers_test.go:176: Cleaning up "existing-network-333691" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-333691
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-333691: (2.109390997s)
I0111 07:47:48.100607  278638 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (28.54s)

TestKicCustomSubnet (30.66s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-820014 --subnet=192.168.60.0/24
E0111 07:48:14.555133  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-820014 --subnet=192.168.60.0/24: (28.439524938s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-820014 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-820014" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-820014
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-820014: (2.19064957s)
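
The --format argument above, {{(index .IPAM.Config 0).Subnet}}, is a Go text/template that the Docker CLI evaluates against the network's inspect document. A minimal sketch of running the same expression locally, with a simplified stand-in for the IPAM section (the real inspect JSON carries many more fields):

package main

import (
	"os"
	"text/template"
)

// Simplified stand-in for the IPAM portion of `docker network inspect` output.
type ipamConfig struct{ Subnet, Gateway string }

type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24", Gateway: "192.168.60.1"}}

	// Same template string the test passes to --format above.
	tmpl := template.Must(template.New("subnet").Parse("{{(index .IPAM.Config 0).Subnet}}"))
	_ = tmpl.Execute(os.Stdout, n) // prints: 192.168.60.0/24
}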
--- PASS: TestKicCustomSubnet (30.66s)

TestKicStaticIP (30.6s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-930264 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-930264 --static-ip=192.168.200.200: (28.28378778s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-930264 ip
helpers_test.go:176: Cleaning up "static-ip-930264" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-930264
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-930264: (2.124555128s)
--- PASS: TestKicStaticIP (30.60s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (63.73s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-117438 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-117438 --driver=docker  --container-runtime=docker: (28.48644875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-119891 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-119891 --driver=docker  --container-runtime=docker: (29.137248958s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-117438
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-119891
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-119891" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-119891
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-119891: (2.409896209s)
helpers_test.go:176: Cleaning up "first-117438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-117438
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-117438: (2.259089834s)
--- PASS: TestMinikubeProfile (63.73s)

TestMountStart/serial/StartWithMountFirst (10.47s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-759992 --memory=3072 --mount-string /tmp/TestMountStartserial2553620837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-759992 --memory=3072 --mount-string /tmp/TestMountStartserial2553620837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.47372039s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.47s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-759992 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (10.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-762068 --memory=3072 --mount-string /tmp/TestMountStartserial2553620837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-762068 --memory=3072 --mount-string /tmp/TestMountStartserial2553620837/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.0536648s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.05s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-762068 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.56s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-759992 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-759992 --alsologtostderr -v=5: (1.558630806s)
--- PASS: TestMountStart/serial/DeleteFirst (1.56s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-762068 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-762068
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-762068: (1.287702099s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.7s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-762068
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-762068: (7.69723325s)
--- PASS: TestMountStart/serial/RestartStopped (8.70s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-762068 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (82.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633599 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0111 07:51:17.322925  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633599 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.101708595s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.65s)

TestMultiNode/serial/DeployApp2Nodes (6.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-633599 -- rollout status deployment/busybox: (4.070964647s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-hrjs9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-z2jfv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-hrjs9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-z2jfv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-hrjs9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-z2jfv -- nslookup kubernetes.default.svc.cluster.local
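
The six nslookup calls above verify, from a busybox pod scheduled on each node, the public name kubernetes.io and the in-cluster names kubernetes.default and kubernetes.default.svc.cluster.local. A minimal Go sketch of the same resolution check; the cluster-local names only resolve when this runs inside a pod, because those records are served by the cluster's DNS:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Names checked by the test above; the second resolves only in-cluster.
	names := []string{
		"kubernetes.io",
		"kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs)
	}
}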
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.04s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-hrjs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-hrjs9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-z2jfv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-633599 -- exec busybox-769dd8b7dd-z2jfv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (34.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-633599 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-633599 -v=5 --alsologtostderr: (34.182319553s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.91s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-633599 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.78s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp testdata/cp-test.txt multinode-633599:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1381659179/001/cp-test_multinode-633599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599:/home/docker/cp-test.txt multinode-633599-m02:/home/docker/cp-test_multinode-633599_multinode-633599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m02 "sudo cat /home/docker/cp-test_multinode-633599_multinode-633599-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599:/home/docker/cp-test.txt multinode-633599-m03:/home/docker/cp-test_multinode-633599_multinode-633599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m03 "sudo cat /home/docker/cp-test_multinode-633599_multinode-633599-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp testdata/cp-test.txt multinode-633599-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1381659179/001/cp-test_multinode-633599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599-m02:/home/docker/cp-test.txt multinode-633599:/home/docker/cp-test_multinode-633599-m02_multinode-633599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599 "sudo cat /home/docker/cp-test_multinode-633599-m02_multinode-633599.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599-m02:/home/docker/cp-test.txt multinode-633599-m03:/home/docker/cp-test_multinode-633599-m02_multinode-633599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m03 "sudo cat /home/docker/cp-test_multinode-633599-m02_multinode-633599-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp testdata/cp-test.txt multinode-633599-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1381659179/001/cp-test_multinode-633599-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599-m03:/home/docker/cp-test.txt multinode-633599:/home/docker/cp-test_multinode-633599-m03_multinode-633599.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599 "sudo cat /home/docker/cp-test_multinode-633599-m03_multinode-633599.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 cp multinode-633599-m03:/home/docker/cp-test.txt multinode-633599-m02:/home/docker/cp-test_multinode-633599-m03_multinode-633599-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 ssh -n multinode-633599-m02 "sudo cat /home/docker/cp-test_multinode-633599-m03_multinode-633599-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.78s)

TestMultiNode/serial/StopNode (2.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-633599 node stop m03: (1.331125377s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633599 status: exit status 7 (586.593127ms)

-- stdout --
	multinode-633599
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-633599-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-633599-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr: exit status 7 (570.39155ms)

-- stdout --
	multinode-633599
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-633599-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-633599-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0111 07:52:46.239235  446234 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:52:46.239432  446234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:52:46.239458  446234 out.go:374] Setting ErrFile to fd 2...
	I0111 07:52:46.239481  446234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:52:46.240721  446234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:52:46.241071  446234 out.go:368] Setting JSON to false
	I0111 07:52:46.241118  446234 mustload.go:66] Loading cluster: multinode-633599
	I0111 07:52:46.241949  446234 config.go:182] Loaded profile config "multinode-633599": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:52:46.241995  446234 status.go:174] checking status of multinode-633599 ...
	I0111 07:52:46.243056  446234 cli_runner.go:164] Run: docker container inspect multinode-633599 --format={{.State.Status}}
	I0111 07:52:46.254808  446234 notify.go:221] Checking for updates...
	I0111 07:52:46.275775  446234 status.go:371] multinode-633599 host status = "Running" (err=<nil>)
	I0111 07:52:46.275798  446234 host.go:66] Checking if "multinode-633599" exists ...
	I0111 07:52:46.276113  446234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-633599
	I0111 07:52:46.306974  446234 host.go:66] Checking if "multinode-633599" exists ...
	I0111 07:52:46.307507  446234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:52:46.307591  446234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-633599
	I0111 07:52:46.327225  446234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/multinode-633599/id_rsa Username:docker}
	I0111 07:52:46.433074  446234 ssh_runner.go:195] Run: systemctl --version
	I0111 07:52:46.439427  446234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:52:46.453821  446234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0111 07:52:46.519539  446234 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-11 07:52:46.509010198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0111 07:52:46.520074  446234 kubeconfig.go:125] found "multinode-633599" server: "https://192.168.67.2:8443"
	I0111 07:52:46.520117  446234 api_server.go:166] Checking apiserver status ...
	I0111 07:52:46.520163  446234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0111 07:52:46.533577  446234 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2105/cgroup
	I0111 07:52:46.541652  446234 api_server.go:192] apiserver freezer: "11:freezer:/docker/1d8d697fa25f5e8f116fb582d94039bbc91319024f74e3aa2a3dc2de1868d815/kubepods/burstable/pod6a7d4a7b7a4f71b1f2275001c68ea27d/a73fa5493fe22bb6eb0ddb1bdcb80ea074c5b95bbb5fb30807f46ee23713b0c0"
	I0111 07:52:46.541728  446234 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1d8d697fa25f5e8f116fb582d94039bbc91319024f74e3aa2a3dc2de1868d815/kubepods/burstable/pod6a7d4a7b7a4f71b1f2275001c68ea27d/a73fa5493fe22bb6eb0ddb1bdcb80ea074c5b95bbb5fb30807f46ee23713b0c0/freezer.state
	I0111 07:52:46.549815  446234 api_server.go:214] freezer state: "THAWED"
	I0111 07:52:46.549843  446234 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0111 07:52:46.558392  446234 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0111 07:52:46.558425  446234 status.go:463] multinode-633599 apiserver status = Running (err=<nil>)
	I0111 07:52:46.558437  446234 status.go:176] multinode-633599 status: &{Name:multinode-633599 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:52:46.558454  446234 status.go:174] checking status of multinode-633599-m02 ...
	I0111 07:52:46.558782  446234 cli_runner.go:164] Run: docker container inspect multinode-633599-m02 --format={{.State.Status}}
	I0111 07:52:46.576617  446234 status.go:371] multinode-633599-m02 host status = "Running" (err=<nil>)
	I0111 07:52:46.576654  446234 host.go:66] Checking if "multinode-633599-m02" exists ...
	I0111 07:52:46.576970  446234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-633599-m02
	I0111 07:52:46.596598  446234 host.go:66] Checking if "multinode-633599-m02" exists ...
	I0111 07:52:46.596928  446234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0111 07:52:46.596972  446234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-633599-m02
	I0111 07:52:46.614172  446234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/22402-276769/.minikube/machines/multinode-633599-m02/id_rsa Username:docker}
	I0111 07:52:46.724385  446234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0111 07:52:46.737494  446234 status.go:176] multinode-633599-m02 status: &{Name:multinode-633599-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:52:46.737528  446234 status.go:174] checking status of multinode-633599-m03 ...
	I0111 07:52:46.737887  446234 cli_runner.go:164] Run: docker container inspect multinode-633599-m03 --format={{.State.Status}}
	I0111 07:52:46.754598  446234 status.go:371] multinode-633599-m03 host status = "Stopped" (err=<nil>)
	I0111 07:52:46.754621  446234 status.go:384] host is not running, skipping remaining checks
	I0111 07:52:46.754627  446234 status.go:176] multinode-633599-m03 status: &{Name:multinode-633599-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
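
The stderr trace above shows how `status` concludes the apiserver is Running: it locates the kube-apiserver process, confirms its freezer cgroup is THAWED, then probes https://192.168.67.2:8443/healthz and treats HTTP 200 as healthy. A minimal sketch of that final probe; skipping TLS verification is an assumption for a standalone check (minikube's real client may trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves /healthz over TLS with a cluster-internal CA,
	// so this standalone probe skips verification (assumption, see above).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 = healthy, as logged above
}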
--- PASS: TestMultiNode/serial/StopNode (2.49s)

TestMultiNode/serial/StartAfterStop (9.45s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-633599 node start m03 -v=5 --alsologtostderr: (8.656870107s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.45s)

TestMultiNode/serial/RestartKeepsNodes (80.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-633599
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-633599
E0111 07:53:14.555096  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-633599: (23.057758158s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633599 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633599 --wait=true -v=5 --alsologtostderr: (57.498642054s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-633599
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.69s)

TestMultiNode/serial/DeleteNode (5.89s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-633599 node delete m03: (5.188301813s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.89s)

TestMultiNode/serial/StopMultiNode (21.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 stop
E0111 07:54:37.606230  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-633599 stop: (21.768733499s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633599 status: exit status 7 (98.226936ms)

-- stdout --
	multinode-633599
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-633599-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr: exit status 7 (85.062351ms)

-- stdout --
	multinode-633599
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-633599-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0111 07:54:44.713801  460033 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:54:44.713947  460033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:54:44.713959  460033 out.go:374] Setting ErrFile to fd 2...
	I0111 07:54:44.713990  460033 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:54:44.714257  460033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:54:44.714481  460033 out.go:368] Setting JSON to false
	I0111 07:54:44.714523  460033 mustload.go:66] Loading cluster: multinode-633599
	I0111 07:54:44.714597  460033 notify.go:221] Checking for updates...
	I0111 07:54:44.715779  460033 config.go:182] Loaded profile config "multinode-633599": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:54:44.715799  460033 status.go:174] checking status of multinode-633599 ...
	I0111 07:54:44.716463  460033 cli_runner.go:164] Run: docker container inspect multinode-633599 --format={{.State.Status}}
	I0111 07:54:44.733880  460033 status.go:371] multinode-633599 host status = "Stopped" (err=<nil>)
	I0111 07:54:44.733911  460033 status.go:384] host is not running, skipping remaining checks
	I0111 07:54:44.733918  460033 status.go:176] multinode-633599 status: &{Name:multinode-633599 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0111 07:54:44.733949  460033 status.go:174] checking status of multinode-633599-m02 ...
	I0111 07:54:44.734269  460033 cli_runner.go:164] Run: docker container inspect multinode-633599-m02 --format={{.State.Status}}
	I0111 07:54:44.752542  460033 status.go:371] multinode-633599-m02 host status = "Stopped" (err=<nil>)
	I0111 07:54:44.752561  460033 status.go:384] host is not running, skipping remaining checks
	I0111 07:54:44.752568  460033 status.go:176] multinode-633599-m02 status: &{Name:multinode-633599-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.95s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633599 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633599 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (54.112804242s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-633599 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.80s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-633599
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633599-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-633599-m02 --driver=docker  --container-runtime=docker: exit status 14 (103.180474ms)

                                                
                                                
-- stdout --
	* [multinode-633599-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-633599-m02' is duplicated with machine name 'multinode-633599-m02' in profile 'multinode-633599'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-633599-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-633599-m03 --driver=docker  --container-runtime=docker: (29.838025815s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-633599
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-633599: exit status 80 (357.848663ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-633599 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-633599-m03 already exists in multinode-633599-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-633599-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-633599-m03: (2.27559532s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.64s)

                                                
                                    
TestScheduledStopUnix (102.9s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-507984 --memory=3072 --driver=docker  --container-runtime=docker
E0111 07:56:17.323063  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-507984 --memory=3072 --driver=docker  --container-runtime=docker: (29.648311412s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507984 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:56:46.065410  473915 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:56:46.065651  473915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:46.065678  473915 out.go:374] Setting ErrFile to fd 2...
	I0111 07:56:46.065700  473915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:46.065985  473915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:56:46.066297  473915 out.go:368] Setting JSON to false
	I0111 07:56:46.066474  473915 mustload.go:66] Loading cluster: scheduled-stop-507984
	I0111 07:56:46.066973  473915 config.go:182] Loaded profile config "scheduled-stop-507984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:56:46.067098  473915 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/scheduled-stop-507984/config.json ...
	I0111 07:56:46.067333  473915 mustload.go:66] Loading cluster: scheduled-stop-507984
	I0111 07:56:46.067505  473915 config.go:182] Loaded profile config "scheduled-stop-507984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-507984 -n scheduled-stop-507984
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507984 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:56:46.529679  474005 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:56:46.529821  474005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:46.529834  474005 out.go:374] Setting ErrFile to fd 2...
	I0111 07:56:46.529841  474005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:56:46.530234  474005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:56:46.533377  474005 out.go:368] Setting JSON to false
	I0111 07:56:46.533600  474005 daemonize_unix.go:73] killing process 473931 as it is an old scheduled stop
	I0111 07:56:46.533695  474005 mustload.go:66] Loading cluster: scheduled-stop-507984
	I0111 07:56:46.534154  474005 config.go:182] Loaded profile config "scheduled-stop-507984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:56:46.534238  474005 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/scheduled-stop-507984/config.json ...
	I0111 07:56:46.534425  474005 mustload.go:66] Loading cluster: scheduled-stop-507984
	I0111 07:56:46.534583  474005 config.go:182] Loaded profile config "scheduled-stop-507984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0111 07:56:46.545608  278638 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/scheduled-stop-507984/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507984 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-507984 -n scheduled-stop-507984
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-507984
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-507984 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0111 07:57:12.455441  474738 out.go:360] Setting OutFile to fd 1 ...
	I0111 07:57:12.455688  474738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:57:12.455732  474738 out.go:374] Setting ErrFile to fd 2...
	I0111 07:57:12.455753  474738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0111 07:57:12.456776  474738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22402-276769/.minikube/bin
	I0111 07:57:12.457121  474738 out.go:368] Setting JSON to false
	I0111 07:57:12.457284  474738 mustload.go:66] Loading cluster: scheduled-stop-507984
	I0111 07:57:12.457707  474738 config.go:182] Loaded profile config "scheduled-stop-507984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0111 07:57:12.457826  474738 profile.go:143] Saving config to /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/scheduled-stop-507984/config.json ...
	I0111 07:57:12.458054  474738 mustload.go:66] Loading cluster: scheduled-stop-507984
	I0111 07:57:12.458215  474738 config.go:182] Loaded profile config "scheduled-stop-507984": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-507984
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-507984: exit status 7 (71.795311ms)

                                                
                                                
-- stdout --
	scheduled-stop-507984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-507984 -n scheduled-stop-507984
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-507984 -n scheduled-stop-507984: exit status 7 (69.786253ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-507984" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-507984
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-507984: (1.651316397s)
--- PASS: TestScheduledStopUnix (102.90s)
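TestScheduledStopUnix exercises the schedule/cancel/re-schedule cycle: each new --schedule invocation kills any older scheduled-stop process (note "killing process 473931 as it is an old scheduled stop" in the stderr dump above), --cancel-scheduled clears the pending stop, and once a deadline actually fires, status reports the host as Stopped and exits with status 7. A rough Go sketch of that cycle using only flags that appear in the log, assuming a minikube binary on PATH; the profile name is hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// run executes minikube with the given arguments and returns its
	// combined output; errors are deliberately ignored in this sketch.
	func run(args ...string) (string, error) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const profile = "scheduled-stop-demo" // hypothetical profile name

		// Schedule a stop, then cancel it before the deadline; this is
		// the path the test verifies before letting a countdown expire.
		run("stop", "-p", profile, "--schedule", "15s")
		run("stop", "-p", profile, "--cancel-scheduled")

		// Re-schedule and wait past the deadline; the host should then
		// report "Stopped" (status exits non-zero for a stopped cluster,
		// matching the exit status 7 seen in the log).
		run("stop", "-p", profile, "--schedule", "15s")
		time.Sleep(20 * time.Second)
		host, _ := run("status", "--format={{.Host}}", "-p", profile)
		fmt.Println("host state:", host)
	}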

                                                
                                    
TestSkaffold (137.59s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe659210445 version
skaffold_test.go:63: skaffold version: v2.17.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-299813 --memory=3072 --driver=docker  --container-runtime=docker
E0111 07:58:14.554465  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-299813 --memory=3072 --driver=docker  --container-runtime=docker: (29.467999149s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe659210445 run --minikube-profile skaffold-299813 --kube-context skaffold-299813 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe659210445 run --minikube-profile skaffold-299813 --kube-context skaffold-299813 --status-check=true --port-forward=false --interactive=false: (1m32.42346715s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-67b448f556-ddncg" [0ceb040f-76de-42ac-b668-63e01594f14b] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003174169s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-6b88685b6-fs48r" [78e84b48-6001-41b3-83ab-f4795c72af68] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002994903s
helpers_test.go:176: Cleaning up "skaffold-299813" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-299813
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-299813: (3.016116201s)
--- PASS: TestSkaffold (137.59s)

                                                
                                    
TestInsufficientStorage (10.95s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-621399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-621399 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.570456536s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"661de627-24ab-4057-aee6-5aca40c5adbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-621399] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8e268ac-466d-4d92-8ac8-d14c5e785ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22402"}}
	{"specversion":"1.0","id":"d0caeea1-fd62-4822-84c1-856bb2a951fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aa50f598-f046-4a04-b7dd-013b3f591c04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig"}}
	{"specversion":"1.0","id":"c8ffc781-4a4b-4db3-aeb4-406e256434c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube"}}
	{"specversion":"1.0","id":"3613fddf-4505-41de-b302-9c80c5abea9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3efc06e8-56dd-4bca-bb08-9d9d4c8cc378","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ea98073a-db70-48db-b9e5-a3ccc827efb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"122e6273-154f-43a4-82fb-e7f3e1f0a4bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b59f3e37-3a1a-4638-8875-3e1001465cf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"25d31456-f1d6-441e-94c0-226fee9b2fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"583a113b-959f-496d-a4f2-25426de4efae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-621399\" primary control-plane node in \"insufficient-storage-621399\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"546770c5-0881-4245-91ed-72d840b1795c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1768032998-22402 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"637f0f54-6bcb-41ae-9595-5321b86e9198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a78cf82-f1b8-4bac-8a76-ef9faa9b8c88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-621399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-621399 --output=json --layout=cluster: exit status 7 (313.127939ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-621399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:00:25.715402  485316 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-621399" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-621399 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-621399 --output=json --layout=cluster: exit status 7 (301.297618ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-621399","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-621399","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0111 08:00:26.016115  485383 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-621399" does not appear in /home/jenkins/minikube-integration/22402-276769/kubeconfig
	E0111 08:00:26.027242  485383 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/insufficient-storage-621399/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-621399" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-621399
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-621399: (1.761293573s)
--- PASS: TestInsufficientStorage (10.95s)
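With --output=json, every line minikube prints is a CloudEvents envelope whose "type" field distinguishes steps, info messages, and errors; TestInsufficientStorage keys off the io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A small Go sketch that filters those error events out of such a stream, assuming the line-delimited shape shown above; the field names come from the log, everything else is illustrative.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors just the parts of minikube's CloudEvents envelope
	// that this sketch inspects.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Read the JSON stream from stdin, e.g.:
		//   minikube start -p demo --output=json ... | eventfilter
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			// Error events carry "exitcode" and "message" in data, like
			// the RSRC_DOCKER_STORAGE event (exit code 26) above.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}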

                                                
                                    
TestRunningBinaryUpgrade (367.74s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1429908698 start -p running-upgrade-438290 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1429908698 start -p running-upgrade-438290 --memory=3072 --vm-driver=docker  --container-runtime=docker: (59.93254147s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-438290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0111 08:13:14.555248  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-438290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5m4.849950656s)
helpers_test.go:176: Cleaning up "running-upgrade-438290" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-438290
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-438290: (2.11016447s)
--- PASS: TestRunningBinaryUpgrade (367.74s)

                                                
                                    
TestKubernetesUpgrade (177.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.359945383s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-682728 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-682728 --alsologtostderr: (2.177931513s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-682728 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-682728 status --format={{.Host}}: exit status 7 (67.947503ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m48.201532965s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-682728 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (96.443624ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-682728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-682728
	    minikube start -p kubernetes-upgrade-682728 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6827282 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-682728 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-682728 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.047305088s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-682728" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-682728
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-682728: (2.468012813s)
--- PASS: TestKubernetesUpgrade (177.50s)
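The key assertion in TestKubernetesUpgrade is the downgrade refusal: asking an existing v1.35.0 cluster to start as v1.28.0 exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) without touching the cluster, and the suggested recovery is to delete and recreate the profile. A hedged Go sketch of detecting that refusal, assuming a minikube binary on PATH; the profile name is hypothetical.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		const profile = "upgrade-demo" // hypothetical profile name

		// Attempt to move an existing cluster to an older Kubernetes
		// version; minikube refuses with exit status 106, as in the log.
		cmd := exec.Command("minikube", "start", "-p", profile,
			"--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=docker")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 106 {
			fmt.Println("downgrade refused; delete and recreate the profile to go back")
		}
	}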

                                                
                                    
TestMissingContainerUpgrade (90.06s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.250366213 start -p missing-upgrade-079174 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.250366213 start -p missing-upgrade-079174 --memory=3072 --driver=docker  --container-runtime=docker: (35.957373278s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-079174
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-079174: (1.689193772s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-079174
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-079174 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0111 08:16:17.322231  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-079174 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.036377832s)
helpers_test.go:176: Cleaning up "missing-upgrade-079174" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-079174
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-079174: (2.530271308s)
--- PASS: TestMissingContainerUpgrade (90.06s)

                                                
                                    
TestPause/serial/Start (55.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-019995 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0111 08:01:17.322920  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-019995 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (55.804036652s)
--- PASS: TestPause/serial/Start (55.80s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (42.07s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-019995 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-019995 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.037697411s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.07s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-616586 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-616586 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (151.280106ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-616586] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22402
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22402-276769/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22402-276769/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.15s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-616586 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-616586 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.331606248s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-616586 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.82s)

                                                
                                    
TestPause/serial/Pause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-019995 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
TestPause/serial/VerifyStatus (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-019995 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-019995 --output=json --layout=cluster: exit status 2 (466.527127ms)

                                                
                                                
-- stdout --
	{"Name":"pause-019995","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-019995","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)
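The --layout=cluster status document above encodes health as HTTP-style codes (418 Paused, 405 Stopped, 200 OK, 507 InsufficientStorage), and the status command itself exits non-zero for anything but a healthy cluster (exit status 2 here). A Go sketch that decodes just the top-level fields, assuming that JSON shape; the struct and profile name are illustrative.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterStatus declares only the top-level fields of the
	// --layout=cluster document that this sketch reads.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		const profile = "pause-demo" // hypothetical profile name

		// status exits non-zero for a paused cluster, but the JSON still
		// arrives on stdout, so the error is ignored here on purpose.
		out, _ := exec.Command("minikube", "status", "-p", profile,
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("unexpected status output:", err)
			return
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	}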

                                                
                                    
TestPause/serial/Unpause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-019995 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

                                                
                                    
TestPause/serial/PauseAgain (1.18s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-019995 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-019995 --alsologtostderr -v=5: (1.182080542s)
--- PASS: TestPause/serial/PauseAgain (1.18s)

                                                
                                    
TestPause/serial/DeletePaused (2.58s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-019995 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-019995 --alsologtostderr -v=5: (2.580430016s)
--- PASS: TestPause/serial/DeletePaused (2.58s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.498140617s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-019995
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-019995: exit status 1 (22.235505ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-019995: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.57s)
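VerifyDeletedResources confirms that deleting the profile also removed its Docker volume: docker volume inspect on the deleted name prints an empty array and exits 1 with a "no such volume" error, exactly as captured above. A minimal Go sketch of that post-delete check, with a hypothetical profile name.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const profile = "pause-demo" // hypothetical profile name

		// After minikube delete, inspecting the profile's volume should
		// fail; a nil error here would mean the volume still exists.
		out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
		if err != nil {
			fmt.Printf("volume gone (output %q): %v\n", string(out), err)
		} else {
			fmt.Println("unexpected: volume still present")
		}
	}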

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (13.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (11.035783069s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-616586 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-616586 status -o json: exit status 2 (360.190966ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-616586","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-616586
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-616586: (1.909905935s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.31s)

                                                
                                    
TestNoKubernetes/serial/Start (9.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-616586 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (9.685081629s)
--- PASS: TestNoKubernetes/serial/Start (9.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22402-276769/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-616586 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-616586 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.417397ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
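VerifyK8sNotRunning leans on systemctl is-active exiting non-zero for an inactive unit, so the expected outcome of the ssh probe is a failure (the unit's status 3 surfaces as ssh exit status 1 in the log). A Go sketch of the same probe, assuming a minikube binary on PATH; the profile name is hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const profile = "nokubernetes-demo" // hypothetical profile name

		// The remote command is passed to minikube ssh as one argument,
		// mirroring the quoting in the test's invocation above.
		err := exec.Command("minikube", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet").Run()
		if err != nil {
			fmt.Println("kubelet inactive, as expected:", err)
		} else {
			fmt.Println("unexpected: kubelet is active")
		}
	}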

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-616586
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-616586: (1.309574434s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-616586 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-616586 --driver=docker  --container-runtime=docker: (7.621281359s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-616586 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-616586 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.790222ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.86s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (136.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2714624694 start -p stopped-upgrade-974160 --memory=3072 --vm-driver=docker  --container-runtime=docker
E0111 08:18:14.554997  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2714624694 start -p stopped-upgrade-974160 --memory=3072 --vm-driver=docker  --container-runtime=docker: (30.650561054s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2714624694 -p stopped-upgrade-974160 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2714624694 -p stopped-upgrade-974160 stop: (1.95156124s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-974160 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-974160 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m43.733132797s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (136.34s)

                                                
                                    
TestPreload/Start-NoPreload-PullImage (89.22s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-182980 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
E0111 08:20:02.800904  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-182980 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m17.23386245s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-182980 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-182980
E0111 08:21:17.322749  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-182980: (11.102003293s)
--- PASS: TestPreload/Start-NoPreload-PullImage (89.22s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-974160
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-974160: (2.067591097s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.07s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (48.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0111 08:21:00.376328  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (48.196222452s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.20s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-195160 "pgrep -a kubelet"
I0111 08:21:19.729755  278638 config.go:182] Loaded profile config "auto-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gcgfs" [34a79827-e571-40b6-aba0-79ea257e7ed9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gcgfs" [34a79827-e571-40b6-aba0-79ea257e7ed9] Running
E0111 08:21:25.846978  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004676553s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)

TestPreload/Restart-With-Preload-Check-User-Image (56.83s)

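This step restarts the preload-enabled profile and then lists images to verify that an image loaded earlier in the test survived the restart; by hand that is roughly (a sketch; which image to look for depends on the earlier subtest and is not shown here):

  out/minikube-linux-arm64 start -p test-preload-182980 --preload=true --driver=docker --container-runtime=docker
  out/minikube-linux-arm64 -p test-preload-182980 image list
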
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-182980 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-182980 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (56.535585458s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-182980 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (56.83s)

TestNetworkPlugins/group/auto/DNS (0.27s)

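The DNS, Localhost, and HairPin checks that follow (and repeat for every plugin group) reduce to three probes from inside the netcat deployment, all taken from the commands logged below:

  # service DNS resolution
  kubectl --context auto-195160 exec deployment/netcat -- nslookup kubernetes.default
  # loopback connectivity from inside the pod
  kubectl --context auto-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod reaching itself through its own service
  kubectl --context auto-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
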
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/Start (57.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.185250192s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.19s)

TestNetworkPlugins/group/calico/Start (69.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.525940463s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.53s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

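ControllerPod waits for the CNI's own pod to be Ready before the network is exercised; the manual equivalent (a sketch; label and namespace are taken from the log below):

  kubectl --context kindnet-195160 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m
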
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-zsdwb" [04d54f01-1a8a-4fdc-a90f-4f93fdde78d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003360844s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-195160 "pgrep -a kubelet"
I0111 08:22:58.361590  278638 config.go:182] Loaded profile config "kindnet-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-k892p" [7b930c2c-7710-45ba-a545-c83d71db5d80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-k892p" [7b930c2c-7710-45ba-a545-c83d71db5d80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004732274s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

TestNetworkPlugins/group/kindnet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (49.7s)

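Note that --cni accepts a path to a custom CNI manifest as well as a built-in plugin name; condensed from the invocation logged below:

  out/minikube-linux-arm64 start -p custom-flannel-195160 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker
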
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (49.70034664s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.70s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-c2rkm" [4aa748c0-8941-49c1-884d-365c3e103df8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004244587s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-195160 "pgrep -a kubelet"
I0111 08:23:42.867204  278638 config.go:182] Loaded profile config "calico-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-rtb54" [6237833b-2715-41d8-916e-e052979d007e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-rtb54" [6237833b-2715-41d8-916e-e052979d007e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004030034s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.25s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (73.22s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m13.216861158s)
--- PASS: TestNetworkPlugins/group/false/Start (73.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-195160 "pgrep -a kubelet"
I0111 08:24:25.055337  278638 config.go:182] Loaded profile config "custom-flannel-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5tzf9" [3d22b7ff-bb77-4b1a-a13c-6a17f3980583] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5tzf9" [3d22b7ff-bb77-4b1a-a13c-6a17f3980583] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004707225s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (67.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m7.834738128s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.83s)

TestNetworkPlugins/group/false/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-195160 "pgrep -a kubelet"
I0111 08:25:35.516051  278638 config.go:182] Loaded profile config "false-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.46s)

TestNetworkPlugins/group/false/NetCatPod (10.46s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nfkvn" [7df8fa8d-f078-47f2-96dd-0725c8e9d395] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-nfkvn" [7df8fa8d-f078-47f2-96dd-0725c8e9d395] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004961212s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.46s)

TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (49.73s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (49.72576693s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.73s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-195160 "pgrep -a kubelet"
I0111 08:26:11.774006  278638 config.go:182] Loaded profile config "enable-default-cni-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-195160 replace --force -f testdata/netcat-deployment.yaml
I0111 08:26:12.153539  278638 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pjbkc" [206ad414-74f6-4bdf-9975-09b6e1bba2b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pjbkc" [206ad414-74f6-4bdf-9975-09b6e1bba2b5] Running
E0111 08:26:17.322557  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:19.982095  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:19.987648  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:19.997954  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:20.018323  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:20.058863  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:20.139143  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:20.300207  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:20.620620  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:21.260795  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:26:22.540959  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003667419s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (48.76s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (48.759575255s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.76s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-vmmdr" [8fb40cb6-0115-4f18-86ee-6891719489c8] Running
E0111 08:27:00.942232  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003742293s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-195160 "pgrep -a kubelet"
I0111 08:27:03.693476  278638 config.go:182] Loaded profile config "flannel-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-qhnjg" [303d4d9f-c5a0-42bd-84eb-a04aec9416a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-qhnjg" [303d4d9f-c5a0-42bd-84eb-a04aec9416a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003605974s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-195160 "pgrep -a kubelet"
I0111 08:27:38.602777  278638 config.go:182] Loaded profile config "bridge-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

TestNetworkPlugins/group/bridge/NetCatPod (12.49s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bpx55" [bf1d75bd-c9d7-40b3-8a35-aaa45f9cf275] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-bpx55" [bf1d75bd-c9d7-40b3-8a35-aaa45f9cf275] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004390514s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.49s)

TestNetworkPlugins/group/kubenet/Start (70.25s)

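kubenet is selected with --network-plugin rather than --cni; condensed from the invocation logged below:

  out/minikube-linux-arm64 start -p kubenet-195160 --network-plugin=kubenet --driver=docker --container-runtime=docker
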
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0111 08:27:41.903042  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-195160 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m10.251573914s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (70.25s)

TestNetworkPlugins/group/bridge/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (88.15s)

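The old-k8s-version group pins an older release via --kubernetes-version; stripped of the test-only flags, the invocation below is essentially:

  out/minikube-linux-arm64 start -p old-k8s-version-522232 --memory=3072 --driver=docker --container-runtime=docker --kubernetes-version=v1.28.0
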
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-522232 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0111 08:28:32.884067  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kindnet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.555881  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.561612  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.571971  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.592246  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.632519  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.712812  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:36.873240  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:37.194710  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:37.835050  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:39.116238  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:41.676477  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:28:46.797330  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-522232 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m28.153864286s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (88.15s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-195160 "pgrep -a kubelet"
I0111 08:28:51.483601  278638 config.go:182] Loaded profile config "kubenet-195160": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-195160 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-nlvdf" [cd02bfde-acab-4e85-8730-b15940aeed96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-nlvdf" [cd02bfde-acab-4e85-8730-b15940aeed96] Running
E0111 08:28:57.038441  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003744979s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.37s)

TestNetworkPlugins/group/kubenet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-195160 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.36s)

TestNetworkPlugins/group/kubenet/HairPin (0.37s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-195160 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.37s)
E0111 08:34:53.095535  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:55.349213  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:35:02.801415  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/no-preload/serial/FirstStart (74.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-486447 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0111 08:29:25.410804  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:25.416094  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:25.426440  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:25.446690  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:25.486963  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:25.567839  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:25.728414  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:26.049479  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:26.690317  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:27.970891  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:30.531471  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:29:35.652007  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-486447 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m14.553756493s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.55s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.5s)

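DeployApp creates the busybox pod from testdata, waits for it to run, then reads the open-file limit inside it; by hand that is roughly (a sketch; the 8m timeout mirrors the test's wait):

  kubectl --context old-k8s-version-522232 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-522232 wait --for=condition=Ready pod/busybox --timeout=8m
  kubectl --context old-k8s-version-522232 exec busybox -- /bin/sh -c "ulimit -n"
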
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-522232 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6e0067b4-f5f9-43aa-81bd-3bdd7c10e5fc] Pending
E0111 08:29:45.892943  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [6e0067b4-f5f9-43aa-81bd-3bdd7c10e5fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6e0067b4-f5f9-43aa-81bd-3bdd7c10e5fc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004076049s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-522232 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

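This step enables metrics-server with image and registry overrides; whether the override actually landed can be read back from the deployment spec (a sketch; the jsonpath expression is illustrative):

  kubectl --context old-k8s-version-522232 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
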
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-522232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-522232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.320286117s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-522232 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/old-k8s-version/serial/Stop (11.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-522232 --alsologtostderr -v=3
E0111 08:29:58.479558  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:02.800956  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/skaffold-299813/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:06.373945  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-522232 --alsologtostderr -v=3: (11.402832156s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

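On a stopped profile, minikube status prints Stopped and exits non-zero (7 in this run), which the test explicitly tolerates before re-enabling the addon; a manual check (a sketch):

  out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-522232; echo "exit=$?"
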
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-522232 -n old-k8s-version-522232
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-522232 -n old-k8s-version-522232: exit status 7 (77.625991ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-522232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (30.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-522232 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0111 08:30:35.765611  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kindnet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:35.931343  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:35.936679  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:35.946951  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:35.967219  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:36.009578  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:36.089994  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:36.250398  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:36.571149  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:37.211871  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:38.492904  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-522232 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (30.137529596s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-522232 -n old-k8s-version-522232
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (30.63s)

TestStartStop/group/no-preload/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-486447 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [ea89d549-9b89-48df-874b-e10546191a29] Pending
helpers_test.go:353: "busybox" [ea89d549-9b89-48df-874b-e10546191a29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [ea89d549-9b89-48df-874b-e10546191a29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003658133s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-486447 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-rzwqd" [3588a998-f6b1-4361-b892-b27635cc5820] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0111 08:30:41.053719  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-rzwqd" [3588a998-f6b1-4361-b892-b27635cc5820] Running
E0111 08:30:46.174141  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:30:47.334891  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003338798s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.00s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-486447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-486447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003330625s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-486447 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (11.68s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-486447 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-486447 --alsologtostderr -v=3: (11.682931644s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.68s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-rzwqd" [3588a998-f6b1-4361-b892-b27635cc5820] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003660359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-522232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-522232 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-522232 --alsologtostderr -v=1
E0111 08:30:56.415133  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-522232 -n old-k8s-version-522232
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-522232 -n old-k8s-version-522232: exit status 2 (338.01338ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-522232 -n old-k8s-version-522232
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-522232 -n old-k8s-version-522232: exit status 2 (323.107167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-522232 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-522232 -n old-k8s-version-522232
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-522232 -n old-k8s-version-522232
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-486447 -n no-preload-486447
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-486447 -n no-preload-486447: exit status 7 (111.960305ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-486447 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (57.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-486447 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-486447 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (57.193309837s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-486447 -n no-preload-486447
E0111 08:31:58.586772  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.58s)

TestStartStop/group/embed-certs/serial/FirstStart (75.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-794885 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0111 08:31:12.119030  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.124314  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.134663  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.154934  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.195194  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.276112  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.437270  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:12.757457  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:13.397895  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:14.678493  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:16.896246  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:17.239207  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:17.322887  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/addons-664377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:19.982305  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:20.400731  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:22.360324  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:32.600546  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:47.664570  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/auto-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:53.080831  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.307676  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.312949  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.323206  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.343538  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.383874  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.464308  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.625452  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.857031  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:31:57.946214  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-794885 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m15.3305164s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.33s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c7wfg" [f2d038fd-0a15-4a98-8ebf-1386abaa5bf3] Running
E0111 08:31:59.867697  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:02.427985  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003637289s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c7wfg" [f2d038fd-0a15-4a98-8ebf-1386abaa5bf3] Running
E0111 08:32:07.548743  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:09.255244  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00315801s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-486447 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-486447 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-486447 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-486447 -n no-preload-486447
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-486447 -n no-preload-486447: exit status 2 (354.534142ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-486447 -n no-preload-486447
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-486447 -n no-preload-486447: exit status 2 (342.528019ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-486447 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-486447 -n no-preload-486447
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-486447 -n no-preload-486447
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-016823 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-016823 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m10.036959327s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.04s)

TestStartStop/group/embed-certs/serial/DeployApp (11.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-794885 create -f testdata/busybox.yaml
E0111 08:32:17.789568  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [56dd9f8b-b65c-4f13-87ac-0979b44eba26] Pending
helpers_test.go:353: "busybox" [56dd9f8b-b65c-4f13-87ac-0979b44eba26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [56dd9f8b-b65c-4f13-87ac-0979b44eba26] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004368855s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-794885 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-794885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-794885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.340579553s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-794885 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/embed-certs/serial/Stop (11.57s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-794885 --alsologtostderr -v=3
E0111 08:32:34.041084  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:38.269785  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.049781  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.054998  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.065245  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.085507  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.125739  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.206025  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.366267  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:39.686699  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:40.327697  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:41.608307  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-794885 --alsologtostderr -v=3: (11.571578681s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.57s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-794885 -n embed-certs-794885
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-794885 -n embed-certs-794885: exit status 7 (136.772981ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-794885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/embed-certs/serial/SecondStart (53.06s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-794885 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0111 08:32:44.169219  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:49.290276  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:51.920771  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kindnet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:32:59.530573  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:14.554303  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/functional-480092/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:19.229978  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:19.606642  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kindnet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:19.778125  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/false-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:20.010804  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-794885 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (52.705516611s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-794885 -n embed-certs-794885
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.06s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-016823 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c96985ba-ba4a-4376-ad93-c0f381183749] Pending
helpers_test.go:353: "busybox" [c96985ba-ba4a-4376-ad93-c0f381183749] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [c96985ba-ba4a-4376-ad93-c0f381183749] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003395118s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-016823 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-ksqhp" [cb67849e-03ba-4ac0-94ad-f0457f07fe30] Running
E0111 08:33:36.554857  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003709434s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-016823 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-016823 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-016823 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-016823 --alsologtostderr -v=3: (11.450456074s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.45s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-ksqhp" [cb67849e-03ba-4ac0-94ad-f0457f07fe30] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003253764s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-794885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-794885 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-794885 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-794885 -n embed-certs-794885
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-794885 -n embed-certs-794885: exit status 2 (334.815411ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-794885 -n embed-certs-794885
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-794885 -n embed-certs-794885: exit status 2 (335.818599ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-794885 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-794885 -n embed-certs-794885
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-794885 -n embed-certs-794885
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823: exit status 7 (153.476077ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-016823 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (36.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-016823 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-016823 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (36.009154321s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (36.43s)

TestStartStop/group/newest-cni/serial/FirstStart (40.76s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-615462 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0111 08:33:54.360348  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kubenet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:55.961405  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/enable-default-cni-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:33:56.921143  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kubenet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:00.971412  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/bridge-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:02.041938  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kubenet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:04.241732  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/calico-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:12.283027  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kubenet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:25.410946  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/custom-flannel-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-615462 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (40.75691903s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.76s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tkvqs" [ae4a3b92-a09d-4183-ad33-822679c8c2a0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003861973s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-tkvqs" [ae4a3b92-a09d-4183-ad33-822679c8c2a0] Running
E0111 08:34:32.763942  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/kubenet-195160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003655117s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-016823 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-615462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-615462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049060094s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (11.7s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-615462 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-615462 --alsologtostderr -v=3: (11.700909104s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.70s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-016823 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-016823 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823: exit status 2 (350.523047ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823: exit status 2 (340.639009ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-016823 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-016823 -n default-k8s-diff-port-016823
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)
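The Pause flow above is: pause the profile, confirm the control plane reports Paused and the kubelet reports Stopped (each status call exits 2, which the harness records as "may be ok"), then unpause and re-check. A sketch of that status probe in Go, assuming the minikube binary path and profile from this log; the exit-code meanings are read off this log, not asserted from minikube documentation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus runs `minikube status` with a Go template for one field,
// returning the printed value and the command's exit code. It assumes the
// binary exists; non-exit errors are reported as code 0 with empty output.
func componentStatus(profile, field string) (string, int) {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "default-k8s-diff-port-016823"
	for _, field := range []string{"APIServer", "Kubelet"} {
		val, code := componentStatus(profile, field)
		// In the log above, a paused cluster yields exit status 2,
		// with APIServer=Paused and Kubelet=Stopped.
		fmt.Printf("%s=%s (exit %d)\n", field, val, code)
	}
}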

                                                
                                    
TestPreload/PreloadSrc/gcs (4.44s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-643861 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E0111 08:34:45.084742  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.102031  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.119875  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.140296  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.181157  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.262991  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.424278  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:45.744747  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0111 08:34:46.385918  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-643861 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (4.186393991s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-643861" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-643861
--- PASS: TestPreload/PreloadSrc/gcs (4.44s)
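The three PreloadSrc subtests run the same --download-only start against different preload sources: gcs, github, and a second gcs run that should be served from the local cache (hence the sub-second gcs-cached result below versus roughly 4s for the cold downloads). A table-driven sketch of that loop in Go; the profile names here are hypothetical stand-ins for the generated ones in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One download-only run per preload source, mirroring the three
	// TestPreload/PreloadSrc subtests in this report.
	cases := []struct{ name, source, k8sVersion string }{
		{"gcs", "gcs", "v1.34.0-rc.1"},
		{"github", "github", "v1.34.0-rc.2"},
		{"gcs-cached", "gcs", "v1.34.0-rc.2"}, // second gcs run should hit the cache
	}
	for _, c := range cases {
		profile := "test-preload-dl-" + c.name
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
			"--download-only",
			"--kubernetes-version", c.k8sVersion,
			"--preload-source="+c.source,
			"--driver=docker", "--container-runtime=docker")
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: download failed: %v\n", c.name, err)
			continue
		}
		// Clean up the throwaway profile, as helpers_test.go does above.
		exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).Run()
	}
}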

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-615462 -n newest-cni-615462
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-615462 -n newest-cni-615462: exit status 7 (77.883228ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-615462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-615462 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0111 08:34:47.666997  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-615462 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (16.688925108s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-615462 -n newest-cni-615462
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.15s)
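The second start reuses the newest-cni profile with --wait=apiserver,system_pods,default_sa because, as the warnings above note, a CNI-only cluster has no network plugin installed yet, so waits that need schedulable pods would hang; the kubeadm pod-network-cidr extra-config reserves 10.42.0.0/16 for the eventual CNI. A sketch assembling the same restart (all flag values copied from this log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Restart an existing CNI-mode profile. Readiness is limited to
	// components that don't need schedulable pods: the apiserver,
	// system pods, and the default service account.
	args := []string{
		"start", "-p", "newest-cni-615462",
		"--memory=3072",
		"--wait=apiserver,system_pods,default_sa",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=docker", "--container-runtime=docker",
		"--kubernetes-version=v1.35.0",
	}
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("second start failed: %v\n", err)
	}
}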

                                                
                                    
TestPreload/PreloadSrc/github (3.97s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-130236 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E0111 08:34:50.228032  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-130236 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (3.754346468s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-130236" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-130236
--- PASS: TestPreload/PreloadSrc/github (3.97s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.59s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-223101 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-223101" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-223101
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.59s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-615462 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-615462 --alsologtostderr -v=1
E0111 08:35:05.590007  278638 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/old-k8s-version-522232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-615462 -n newest-cni-615462
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-615462 -n newest-cni-615462: exit status 2 (328.311012ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-615462 -n newest-cni-615462
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-615462 -n newest-cni-615462: exit status 2 (340.651449ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-615462 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-615462 -n newest-cni-615462
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-615462 -n newest-cni-615462
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

                                                
                                    

Test skip (26/352)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-683064 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-683064" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-683064
--- SKIP: TestDownloadOnlyKic (0.43s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.13s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-195160 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-195160

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-195160

>>> host: /etc/nsswitch.conf:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/hosts:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/resolv.conf:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-195160

>>> host: crictl pods:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: crictl containers:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> k8s: describe netcat deployment:
error: context "cilium-195160" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-195160" does not exist

>>> k8s: netcat logs:
error: context "cilium-195160" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-195160" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-195160" does not exist

>>> k8s: coredns logs:
error: context "cilium-195160" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-195160" does not exist

>>> k8s: api server logs:
error: context "cilium-195160" does not exist

>>> host: /etc/cni:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: ip a s:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: ip r s:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: iptables-save:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: iptables table nat:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-195160

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-195160

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-195160" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-195160" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-195160

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-195160

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-195160" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-195160" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-195160" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-195160" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-195160" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: kubelet daemon config:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> k8s: kubelet logs:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22402-276769/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 08:02:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-616586
contexts:
- context:
    cluster: NoKubernetes-616586
    extensions:
    - extension:
        last-update: Sun, 11 Jan 2026 08:02:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-616586
  name: NoKubernetes-616586
current-context: NoKubernetes-616586
kind: Config
preferences: {}
users:
- name: NoKubernetes-616586
  user:
    client-certificate: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/NoKubernetes-616586/client.crt
    client-key: /home/jenkins/minikube-integration/22402-276769/.minikube/profiles/NoKubernetes-616586/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-195160

>>> host: docker daemon status:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: docker daemon config:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: docker system info:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: cri-docker daemon status:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: cri-docker daemon config:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: cri-dockerd version:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: containerd daemon status:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: containerd daemon config:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: containerd config dump:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: crio daemon status:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: crio daemon config:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: /etc/crio:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

>>> host: crio config:
* Profile "cilium-195160" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-195160"

----------------------- debugLogs end: cilium-195160 [took: 5.873467612s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-195160" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-195160
--- SKIP: TestNetworkPlugins/group/cilium (6.13s)
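Every kubectl probe in the debugLogs dump above failed identically because no cilium-195160 context was ever created: the harness skipped the test before running "minikube start", and the only kubeconfig entry left is NoKubernetes-616586. A sketch of guarding such probes with client-go's kubeconfig loader; the helper and its use of the default ~/.kube/config path are illustrative assumptions, not part of the test suite:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig and report whether a given context
	// exists, which is exactly the condition the debugLogs probes hit.
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
	if err != nil {
		fmt.Printf("load kubeconfig: %v\n", err)
		return
	}
	name := "cilium-195160"
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches the repeated failure above:
		// context was not found for specified context: cilium-195160
		fmt.Printf("context %q not found; available:", name)
		for ctx := range cfg.Contexts {
			fmt.Printf(" %s", ctx)
		}
		fmt.Println()
		return
	}
	fmt.Printf("context %q exists; current-context=%s\n", name, cfg.CurrentContext)
}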

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-197266" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-197266
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    