Test Report: Docker_Linux_docker_arm64 22343

72a35eba785b899784aeadb9114946ce54d68eef:2025-12-27:43008

Failed tests (2/352)

Order  Failed test           Duration (s)
   52  TestForceSystemdFlag       508.16
   53  TestForceSystemdEnv        511.21
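The first failure can be reproduced outside the harness by issuing the same invocation the test runs (see docker_test.go:91 in the log below); a minimal sketch, assuming a minikube checkout with out/minikube-linux-arm64 already built and a working Docker daemon:

  # exact flags taken from the failing test's log
  out/minikube-linux-arm64 start -p force-systemd-flag-574701 \
    --memory=3072 --force-systemd --alsologtostderr -v=5 \
    --driver=docker --container-runtime=docker

  # clean up the profile afterwards
  out/minikube-linux-arm64 delete -p force-systemd-flag-574701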
TestForceSystemdFlag (508.16s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1227 09:57:53.494715  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:59:10.017107  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.043587  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.049275  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.059550  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.079845  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.120341  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.200727  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.361213  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:33.681938  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:34.322243  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:35.602722  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:38.163550  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:43.284752  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:00:53.525768  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:01:06.962280  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:01:14.006496  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:01:54.966739  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:02:53.494777  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:03:16.887002  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:05:33.039157  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
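The cert_rotation errors above reference client certificates for unrelated profiles (functional-918607, addons-071879, skaffold-964044) whose files no longer exist on the runner; they are emitted by client-go's certificate-reload watcher and are most likely leftover noise from earlier tests rather than the cause of this failure. A quick check of which profile certs actually exist, using the path from the log:

  ls /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/*/client.crt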
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m23.979438864s)
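The start invocation exited with status 109 after roughly 8m24s. When triaging a timeout like this, the usual first step is to capture minikube's own diagnostics before the profile is deleted; a sketch, assuming the profile from the failed run still exists:

  # collect cluster-level logs for the failed profile
  out/minikube-linux-arm64 logs -p force-systemd-flag-574701 --file=force-systemd-flag.log

  # inspect the node container that minikube created
  docker logs force-systemd-flag-574701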

-- stdout --
	* [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	
	

-- /stdout --
** stderr ** 
	I1227 09:57:23.854045  769388 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:57:23.854214  769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.854225  769388 out.go:374] Setting ErrFile to fd 2...
	I1227 09:57:23.854241  769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.854500  769388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:57:23.854935  769388 out.go:368] Setting JSON to false
	I1227 09:57:23.855775  769388 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16795,"bootTime":1766812649,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:57:23.855839  769388 start.go:143] virtualization:  
	I1227 09:57:23.860623  769388 out.go:179] * [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:57:23.864301  769388 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:57:23.864369  769388 notify.go:221] Checking for updates...
	I1227 09:57:23.871858  769388 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:57:23.879831  769388 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:57:23.884111  769388 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:57:23.887027  769388 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:57:23.890016  769388 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:57:23.893523  769388 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:23.893679  769388 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:57:23.942486  769388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:57:23.942607  769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:24.033935  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-27 09:57:24.020858019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:24.034041  769388 docker.go:319] overlay module found
	I1227 09:57:24.037348  769388 out.go:179] * Using the docker driver based on user configuration
	I1227 09:57:24.040109  769388 start.go:309] selected driver: docker
	I1227 09:57:24.040131  769388 start.go:928] validating driver "docker" against <nil>
	I1227 09:57:24.040145  769388 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:57:24.040848  769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:24.119453  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-27 09:57:24.103606726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:24.119606  769388 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:57:24.119820  769388 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:57:24.124043  769388 out.go:179] * Using Docker driver with root privileges
	I1227 09:57:24.126916  769388 cni.go:84] Creating CNI manager for ""
	I1227 09:57:24.126993  769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:24.127014  769388 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 09:57:24.127097  769388 start.go:353] cluster config:
	{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:24.130340  769388 out.go:179] * Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
	I1227 09:57:24.133152  769388 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 09:57:24.136080  769388 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:57:24.140060  769388 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:57:24.140141  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:24.140165  769388 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1227 09:57:24.140177  769388 cache.go:65] Caching tarball of preloaded images
	I1227 09:57:24.140256  769388 preload.go:251] Found /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:57:24.140271  769388 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 09:57:24.140383  769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
	I1227 09:57:24.140406  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json: {Name:mk4143ebcade308fb419077e3f8332f378dc7937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:24.161069  769388 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:57:24.161091  769388 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:57:24.161109  769388 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:57:24.161140  769388 start.go:360] acquireMachinesLock for force-systemd-flag-574701: {Name:mkf48a67b67df727c9d74e45482507e00be21327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:57:24.161254  769388 start.go:364] duration metric: took 93.536µs to acquireMachinesLock for "force-systemd-flag-574701"
	I1227 09:57:24.161290  769388 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 09:57:24.161353  769388 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:57:24.165884  769388 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:57:24.166208  769388 start.go:159] libmachine.API.Create for "force-systemd-flag-574701" (driver="docker")
	I1227 09:57:24.166249  769388 client.go:173] LocalClient.Create starting
	I1227 09:57:24.166322  769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
	I1227 09:57:24.166357  769388 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:24.166372  769388 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:24.166421  769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
	I1227 09:57:24.166486  769388 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:24.166501  769388 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:24.166999  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:57:24.184851  769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:57:24.184931  769388 network_create.go:284] running [docker network inspect force-systemd-flag-574701] to gather additional debugging logs...
	I1227 09:57:24.184947  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701
	W1227 09:57:24.201338  769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 returned with exit code 1
	I1227 09:57:24.201367  769388 network_create.go:287] error running [docker network inspect force-systemd-flag-574701]: docker network inspect force-systemd-flag-574701: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-574701 not found
	I1227 09:57:24.201381  769388 network_create.go:289] output of [docker network inspect force-systemd-flag-574701]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-574701 not found
	
	** /stderr **
	I1227 09:57:24.201475  769388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:24.231038  769388 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
	I1227 09:57:24.231335  769388 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
	I1227 09:57:24.231654  769388 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
	I1227 09:57:24.232203  769388 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d880}
	I1227 09:57:24.232227  769388 network_create.go:124] attempt to create docker network force-systemd-flag-574701 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:57:24.232294  769388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-574701 force-systemd-flag-574701
	I1227 09:57:24.312633  769388 network_create.go:108] docker network force-systemd-flag-574701 192.168.76.0/24 created
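Per the three "skipping subnet" lines above, 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were already claimed by other profiles' bridges, so minikube allocated 192.168.76.0/24. The resulting network can be verified with the same inspect template the log itself uses:

  docker network inspect force-systemd-flag-574701 \
    --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
  # expected: 192.168.76.0/24 gw 192.168.76.1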
	I1227 09:57:24.312662  769388 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-574701" container
	I1227 09:57:24.312733  769388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:57:24.330428  769388 cli_runner.go:164] Run: docker volume create force-systemd-flag-574701 --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:57:24.354470  769388 oci.go:103] Successfully created a docker volume force-systemd-flag-574701
	I1227 09:57:24.354571  769388 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-574701-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --entrypoint /usr/bin/test -v force-systemd-flag-574701:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:57:25.150777  769388 oci.go:107] Successfully prepared a docker volume force-systemd-flag-574701
	I1227 09:57:25.150847  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:25.150858  769388 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:57:25.150937  769388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:57:29.285806  769388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.134820012s)
	I1227 09:57:29.285838  769388 kic.go:203] duration metric: took 4.134977669s to extract preloaded images to volume ...
	W1227 09:57:29.285987  769388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:57:29.286133  769388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:57:29.373204  769388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-574701 --name force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-574701 --network force-systemd-flag-574701 --ip 192.168.76.2 --volume force-systemd-flag-574701:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
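The docker run above publishes SSH (22), the Docker TLS port (2376), the API server (8443) and the other service ports on ephemeral 127.0.0.1 ports. The host port that minikube later dials for SSH (33723 further down) can be looked up directly with the stock docker CLI:

  docker port force-systemd-flag-574701 22
  # e.g. 127.0.0.1:33723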
	I1227 09:57:29.767688  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Running}}
	I1227 09:57:29.794873  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:29.823050  769388 cli_runner.go:164] Run: docker exec force-systemd-flag-574701 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:57:29.890557  769388 oci.go:144] the created container "force-systemd-flag-574701" has a running status.
	I1227 09:57:29.890594  769388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa...
	I1227 09:57:30.464624  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:57:30.464726  769388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:57:30.506648  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:30.563495  769388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:57:30.563516  769388 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-574701 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:57:30.675307  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:30.705027  769388 machine.go:94] provisionDockerMachine start ...
	I1227 09:57:30.705109  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:30.748542  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:30.748883  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:30.748899  769388 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:57:30.749537  769388 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1227 09:57:33.902589  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
	
	I1227 09:57:33.902611  769388 ubuntu.go:182] provisioning hostname "force-systemd-flag-574701"
	I1227 09:57:33.902682  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:33.920165  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:33.920469  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:33.920480  769388 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-574701 && echo "force-systemd-flag-574701" | sudo tee /etc/hostname
	I1227 09:57:34.085277  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
	
	I1227 09:57:34.085356  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.102383  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.102698  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.102716  769388 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-574701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-574701/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-574701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:57:34.255031  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:57:34.255059  769388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
	I1227 09:57:34.255083  769388 ubuntu.go:190] setting up certificates
	I1227 09:57:34.255093  769388 provision.go:84] configureAuth start
	I1227 09:57:34.255175  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:34.271814  769388 provision.go:143] copyHostCerts
	I1227 09:57:34.271855  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.271887  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
	I1227 09:57:34.271900  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.271973  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
	I1227 09:57:34.272067  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.272089  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
	I1227 09:57:34.272097  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.272126  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
	I1227 09:57:34.272178  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.272198  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
	I1227 09:57:34.272205  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.272232  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
	I1227 09:57:34.272293  769388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-574701 san=[127.0.0.1 192.168.76.2 force-systemd-flag-574701 localhost minikube]
	I1227 09:57:34.545510  769388 provision.go:177] copyRemoteCerts
	I1227 09:57:34.545576  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:57:34.545630  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.562287  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:34.663483  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:57:34.663552  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:57:34.681829  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:57:34.681902  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:57:34.701079  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:57:34.701139  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:57:34.722250  769388 provision.go:87] duration metric: took 467.13373ms to configureAuth
	I1227 09:57:34.722280  769388 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:57:34.722503  769388 config.go:182] Loaded profile config "force-systemd-flag-574701": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:34.722587  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.748482  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.748825  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.748842  769388 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 09:57:34.911917  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 09:57:34.911937  769388 ubuntu.go:71] root file system type: overlay
	I1227 09:57:34.912090  769388 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 09:57:34.912153  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.931590  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.931909  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.931998  769388 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 09:57:35.094955  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 09:57:35.095071  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:35.115477  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:35.115820  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:35.115843  769388 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 09:57:36.313708  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 09:57:35.088526773 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
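The diff above shows minikube replacing the stock ExecStart with one that adds the TCP/TLS endpoint and a default nofile ulimit; the systemd cgroup-driver setting requested by --force-systemd is written separately to /etc/docker/daemon.json further down the log. The unit systemd actually loaded inside the node container can be checked from the host; a sketch, assuming the container is still running:

  docker exec force-systemd-flag-574701 systemctl cat docker.service
  docker exec force-systemd-flag-574701 cat /etc/docker/daemon.json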
	I1227 09:57:36.313732  769388 machine.go:97] duration metric: took 5.608683566s to provisionDockerMachine
	I1227 09:57:36.313745  769388 client.go:176] duration metric: took 12.147489846s to LocalClient.Create
	I1227 09:57:36.313757  769388 start.go:167] duration metric: took 12.14755212s to libmachine.API.Create "force-systemd-flag-574701"
	I1227 09:57:36.313768  769388 start.go:293] postStartSetup for "force-systemd-flag-574701" (driver="docker")
	I1227 09:57:36.313777  769388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:57:36.313843  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:57:36.313894  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.333968  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.436051  769388 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:57:36.439811  769388 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:57:36.439837  769388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:57:36.439848  769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
	I1227 09:57:36.439901  769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
	I1227 09:57:36.439994  769388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
	I1227 09:57:36.440010  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
	I1227 09:57:36.440117  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:57:36.449353  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:36.472877  769388 start.go:296] duration metric: took 159.095049ms for postStartSetup
	I1227 09:57:36.473245  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:36.490073  769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
	I1227 09:57:36.490364  769388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:57:36.490419  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.508708  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.616568  769388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:57:36.622218  769388 start.go:128] duration metric: took 12.460850316s to createHost
	I1227 09:57:36.622246  769388 start.go:83] releasing machines lock for "force-systemd-flag-574701", held for 12.460980323s
	I1227 09:57:36.622323  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:36.641788  769388 ssh_runner.go:195] Run: cat /version.json
	I1227 09:57:36.641849  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.642098  769388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:57:36.642163  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.664287  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.672747  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.780184  769388 ssh_runner.go:195] Run: systemctl --version
	I1227 09:57:36.880930  769388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:57:36.887011  769388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:57:36.887080  769388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:57:36.924112  769388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:57:36.924139  769388 start.go:496] detecting cgroup driver to use...
	I1227 09:57:36.924152  769388 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:36.924252  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:36.946873  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:57:36.956487  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:57:36.966480  769388 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:57:36.966545  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:57:36.977403  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:36.987483  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:57:36.998514  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.010694  769388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:57:37.022875  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:57:37.036011  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:57:37.044803  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:57:37.054260  769388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:57:37.063604  769388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:57:37.071796  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:37.216587  769388 ssh_runner.go:195] Run: sudo systemctl restart containerd
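The run of sed edits at 09:57:36.946-09:57:37.044 rewrites /etc/containerd/config.toml in place: pause image, runc v2, SystemdCgroup = true (the effect of --force-systemd on containerd), CNI conf_dir, and unprivileged ports, followed by daemon-reload and a containerd restart. A sketch of just the SystemdCgroup substitution in Go, assuming the stock config layout the sed -r pattern targets:

    package main

    import (
    	"os"
    	"regexp"
    )

    // forceSystemdCgroup flips SystemdCgroup to true in containerd's config,
    // the same substitution the log performs with sed -r.
    func forceSystemdCgroup(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := forceSystemdCgroup("/etc/containerd/config.toml"); err != nil {
    		panic(err)
    	}
    }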
	I1227 09:57:37.323467  769388 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.323492  769388 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.323546  769388 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 09:57:37.352336  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.365635  769388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:57:37.402353  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.420004  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:57:37.441069  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.461000  769388 ssh_runner.go:195] Run: which cri-dockerd
	I1227 09:57:37.468781  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 09:57:37.477924  769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 09:57:37.502109  769388 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 09:57:37.672967  769388 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 09:57:37.840323  769388 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 09:57:37.840416  769388 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 09:57:37.872525  769388 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 09:57:37.886221  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:38.039548  769388 ssh_runner.go:195] Run: sudo systemctl restart docker
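docker.go:578 ships a 129-byte /etc/docker/daemon.json and restarts Docker; the payload is not echoed in the log, but forcing the systemd cgroup driver on dockerd is conventionally done via exec-opts. A plausible reconstruction, explicitly an assumption rather than the captured file:

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	// Hypothetical reconstruction: the exact 129-byte file is not shown in
    	// the log, but systemd cgroups on dockerd are selected via exec-opts.
    	daemon := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=systemd"},
    	}
    	data, err := json.MarshalIndent(daemon, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
    		panic(err)
    	}
    }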
	I1227 09:57:38.563380  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:57:38.577307  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 09:57:38.592258  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:38.608999  769388 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 09:57:38.783640  769388 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 09:57:38.955435  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.116493  769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 09:57:39.131867  769388 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 09:57:39.146438  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.292670  769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 09:57:39.371970  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:39.392203  769388 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 09:57:39.392325  769388 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
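start.go:553 bounds the wait for /var/run/cri-dockerd.sock at 60s; here the stat succeeds immediately. The polling loop is conceptually simple (a sketch, with an assumed poll interval):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI socket appears, bounded by timeout,
    // as in "Will wait 60s for socket path /var/run/cri-dockerd.sock".
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("cri-dockerd socket is ready")
    }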
	I1227 09:57:39.396824  769388 start.go:574] Will wait 60s for crictl version
	I1227 09:57:39.396962  769388 ssh_runner.go:195] Run: which crictl
	I1227 09:57:39.400890  769388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:57:39.425825  769388 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 09:57:39.425938  769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.452940  769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.487385  769388 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 09:57:39.487511  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:39.509398  769388 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:57:39.513521  769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
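The hosts edit above strips any stale host.minikube.internal line before appending the fresh mapping, which keeps repeated starts idempotent. The same filter-then-append expressed in Go, for illustration:

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHost removes existing lines ending in <TAB>name and appends
    // ip<TAB>name, matching the grep -v / echo pipeline in the log.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    	var kept []string
    	for _, line := range lines {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }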
	I1227 09:57:39.525777  769388 kubeadm.go:884] updating cluster {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:57:39.525889  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:39.525945  769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.550774  769388 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.550799  769388 docker.go:624] Images already preloaded, skipping extraction
	I1227 09:57:39.550866  769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.574219  769388 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.574242  769388 cache_images.go:86] Images are preloaded, skipping loading
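The two docker images listings return the same eight tags minikube expects for Kubernetes v1.35.0, so cache_images skips loading. A simplified version of the set-containment check this plausibly boils down to:

    package main

    import "fmt"

    // hasAllImages reports whether every required image tag is already present,
    // the check behind "Images are preloaded, skipping loading".
    func hasAllImages(present, required []string) bool {
    	seen := make(map[string]bool, len(present))
    	for _, img := range present {
    		seen[img] = true
    	}
    	for _, img := range required {
    		if !seen[img] {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	present := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0",
    		"registry.k8s.io/etcd:3.6.6-0",
    	}
    	fmt.Println(hasAllImages(present, present)) // true
    }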
	I1227 09:57:39.574252  769388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1227 09:57:39.574354  769388 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-574701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:57:39.574415  769388 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 09:57:39.642105  769388 cni.go:84] Creating CNI manager for ""
	I1227 09:57:39.642130  769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:39.642146  769388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:57:39.642167  769388 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-574701 NodeName:force-systemd-flag-574701 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:57:39.642292  769388 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-574701"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
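The generated KubeletConfiguration above is where --force-systemd lands for the kubelet: cgroupDriver: systemd must agree with what Docker and containerd were configured to use earlier, or the kubelet refuses to run pods. A tiny check that parses that field with gopkg.in/yaml.v3 (struct trimmed to the one field of interest):

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    // kubeletConfig is trimmed to the single field --force-systemd must control.
    type kubeletConfig struct {
    	Kind         string `yaml:"kind"`
    	CgroupDriver string `yaml:"cgroupDriver"`
    }

    func main() {
    	doc := []byte("kind: KubeletConfiguration\ncgroupDriver: systemd\n")
    	var kc kubeletConfig
    	if err := yaml.Unmarshal(doc, &kc); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s uses cgroupDriver=%s\n", kc.Kind, kc.CgroupDriver)
    }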
	
	I1227 09:57:39.642363  769388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:57:39.651846  769388 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:57:39.651910  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:57:39.661240  769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1227 09:57:39.677750  769388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:57:39.692714  769388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1227 09:57:39.705586  769388 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:57:39.709624  769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.719304  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.872388  769388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:57:39.905933  769388 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701 for IP: 192.168.76.2
	I1227 09:57:39.905958  769388 certs.go:195] generating shared ca certs ...
	I1227 09:57:39.905975  769388 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:39.906194  769388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
	I1227 09:57:39.906270  769388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
	I1227 09:57:39.906284  769388 certs.go:257] generating profile certs ...
	I1227 09:57:39.906359  769388 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key
	I1227 09:57:39.906376  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt with IP's: []
	I1227 09:57:40.185176  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt ...
	I1227 09:57:40.185209  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt: {Name:mkd8df8f694ab6bd0be298ca10765d50a0840ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.185510  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key ...
	I1227 09:57:40.185530  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key: {Name:mkedfb2c92eeb1c8634de35cfef29ff1eb8c71f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.185683  769388 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a
	I1227 09:57:40.185706  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:57:40.780814  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a ...
	I1227 09:57:40.780832  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a: {Name:mk220ae28824c87aa5d8ba64a794d883980a39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780959  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a ...
	I1227 09:57:40.780966  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a: {Name:mkac97d48f25e58d566aafd93cbcf157b2cb0117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.781034  769388 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt
	I1227 09:57:40.781140  769388 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key
	I1227 09:57:40.781206  769388 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key
	I1227 09:57:40.781219  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt with IP's: []
	I1227 09:57:40.864310  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt ...
	I1227 09:57:40.864342  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt: {Name:mk5dc7c59c3dfc68c7c8e2186f25c0bda8c48900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.864549  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key ...
	I1227 09:57:40.864569  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key: {Name:mk7098be4d9c15bf1f3c8453e90bcc9388cdc9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.864678  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:57:40.864715  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:57:40.864736  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:57:40.864755  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:57:40.864768  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:57:40.864796  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:57:40.864821  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:57:40.864837  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:57:40.864913  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
	W1227 09:57:40.864990  769388 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
	I1227 09:57:40.865007  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:57:40.865038  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:57:40.865102  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:57:40.865134  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
	I1227 09:57:40.865199  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:40.865244  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:40.865267  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
	I1227 09:57:40.865282  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
	I1227 09:57:40.865799  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:57:40.898569  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:57:40.927873  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:57:40.948313  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:57:40.969255  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:57:40.989875  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:57:41.010787  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:57:41.031724  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:57:41.051433  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:57:41.077779  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
	I1227 09:57:41.108786  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
	I1227 09:57:41.133210  769388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:57:41.147828  769388 ssh_runner.go:195] Run: openssl version
	I1227 09:57:41.154460  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.161904  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
	I1227 09:57:41.169300  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.173499  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.173602  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.219730  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.227914  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.234863  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.242037  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:57:41.252122  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.256231  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.256330  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.303396  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:57:41.311657  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:57:41.319645  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.327015  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
	I1227 09:57:41.334332  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.338256  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.338360  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.382878  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:57:41.390786  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
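The openssl x509 -hash calls explain the odd link names: OpenSSL looks up trust anchors in /etc/ssl/certs by subject-name hash, so each installed PEM gets a <hash>.0 symlink (b5213941.0 is minikubeCA's hash here). The same convention, driven from Go:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash mirrors the openssl-x509-hash + ln -fs dance in the log:
    // OpenSSL resolves trust anchors in /etc/ssl/certs via <subject-hash>.0 links.
    func linkBySubjectHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }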
	I1227 09:57:41.399024  769388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:57:41.403779  769388 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:57:41.403832  769388 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:41.403946  769388 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 09:57:41.429145  769388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:57:41.439644  769388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:57:41.448769  769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:57:41.448834  769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:57:41.460465  769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:57:41.460481  769388 kubeadm.go:158] found existing configuration files:
	
	I1227 09:57:41.460550  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:57:41.471042  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:57:41.471103  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:57:41.480178  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:57:41.490398  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:57:41.490464  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:57:41.499105  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.510257  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:57:41.510321  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.520923  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:57:41.534256  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:57:41.534333  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:57:41.542461  769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:57:41.646824  769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:57:41.648335  769388 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:57:41.753889  769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:57:41.754015  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:57:41.754079  769388 kubeadm.go:319] OS: Linux
	I1227 09:57:41.754162  769388 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:57:41.754242  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:57:41.754318  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:57:41.754400  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:57:41.754479  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:57:41.754553  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:57:41.754656  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:57:41.754726  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:57:41.754805  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:57:41.836243  769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:57:41.836443  769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:57:41.836586  769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:57:41.855494  769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:57:41.860963  769388 out.go:252]   - Generating certificates and keys ...
	I1227 09:57:41.861090  769388 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:57:41.861187  769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:57:42.027134  769388 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:57:42.183308  769388 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:57:42.275495  769388 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:57:42.538151  769388 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:57:42.689457  769388 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:57:42.690078  769388 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:57:42.729913  769388 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:57:42.730516  769388 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:57:42.981667  769388 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:57:43.099131  769388 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:57:43.810479  769388 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:57:43.811011  769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:57:44.109743  769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:57:44.315485  769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:57:44.540089  769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:57:44.694926  769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:57:45.077270  769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:57:45.080386  769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:57:45.089864  769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:57:45.093574  769388 out.go:252]   - Booting up control plane ...
	I1227 09:57:45.095563  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:57:45.097773  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:57:45.099785  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:57:45.145757  769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:57:45.145889  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:57:45.157698  769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:57:45.158555  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:57:45.158619  769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:57:45.405440  769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:57:45.405562  769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:01:45.399682  769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001476405s
	I1227 10:01:45.399725  769388 kubeadm.go:319] 
	I1227 10:01:45.399789  769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:01:45.399827  769388 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:01:45.399942  769388 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:01:45.399950  769388 kubeadm.go:319] 
	I1227 10:01:45.400064  769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:01:45.400098  769388 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:01:45.400133  769388 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:01:45.400138  769388 kubeadm.go:319] 
	I1227 10:01:45.404789  769388 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:01:45.405218  769388 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:01:45.405332  769388 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:01:45.405567  769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:01:45.405577  769388 kubeadm.go:319] 
	I1227 10:01:45.405646  769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:01:45.405800  769388 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001476405s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
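This is the actual failure: kubeadm's kubelet-check never gets a 200 from http://127.0.0.1:10248/healthz within 4m0s, and the SystemVerification warning above points at the likely culprit on this cgroups-v1 host (kubelet v1.35 requires the FailCgroupV1 option set to false to tolerate cgroup v1). The wait kubeadm performs is equivalent to polling healthz until a deadline, as sketched below; the poll interval is assumed:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitKubeletHealthy polls the kubelet healthz endpoint the way the log's
    // "curl -sSL http://127.0.0.1:10248/healthz" equivalence describes.
    func waitKubeletHealthy(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	client := &http.Client{Timeout: 2 * time.Second}
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("http://127.0.0.1:10248/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("kubelet not healthy after %s", timeout)
    }

    func main() {
    	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }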
	
	I1227 10:01:45.405885  769388 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 10:01:45.831088  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:45.845534  769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:01:45.845599  769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:01:45.853400  769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:01:45.853418  769388 kubeadm.go:158] found existing configuration files:
	
	I1227 10:01:45.853490  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:01:45.862159  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:01:45.862225  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:01:45.869960  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:01:45.877918  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:01:45.877988  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:01:45.885657  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:01:45.893024  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:01:45.893088  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:01:45.900643  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:01:45.908132  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:01:45.908198  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
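	The grep/rm sequence above is minikube's stale-config cleanup before retrying kubeadm init: each kubeconfig under /etc/kubernetes is kept only if it still points at https://control-plane.minikube.internal:8443. Here all four files are already absent, so every grep exits with status 2 and each rm is a no-op. A compressed Go sketch of the same loop, run locally for illustration where minikube runs it over SSH (ssh_runner.go in the log):
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		// run executes a shell command locally; minikube executes the same
		// commands inside the node container over SSH.
		func run(cmd string) error {
			return exec.Command("/bin/bash", "-c", cmd).Run()
		}
	
		func main() {
			endpoint := "https://control-plane.minikube.internal:8443"
			for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
				path := "/etc/kubernetes/" + f
				// grep exits 1 when the endpoint is missing and 2 when the file
				// is absent; either way the config is treated as stale.
				if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
					_ = run("sudo rm -f " + path)
				}
			}
		}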
	I1227 10:01:45.915813  769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:01:45.955846  769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:01:45.955910  769388 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:01:46.044287  769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:01:46.044366  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:01:46.044408  769388 kubeadm.go:319] OS: Linux
	I1227 10:01:46.044460  769388 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:01:46.044514  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:01:46.044563  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:01:46.044621  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:01:46.044672  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:01:46.044726  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:01:46.044780  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:01:46.044831  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:01:46.044883  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:01:46.122322  769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:01:46.122522  769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:01:46.122662  769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:01:46.135379  769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:01:46.139129  769388 out.go:252]   - Generating certificates and keys ...
	I1227 10:01:46.139327  769388 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:01:46.139450  769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:01:46.139598  769388 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:01:46.139674  769388 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:01:46.139756  769388 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:01:46.139815  769388 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:01:46.139883  769388 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:01:46.139949  769388 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:01:46.140059  769388 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:01:46.140138  769388 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:01:46.140469  769388 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:01:46.140529  769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:01:46.278774  769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:01:46.467106  769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:01:46.674089  769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:01:46.962090  769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:01:47.089511  769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:01:47.090121  769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:01:47.094363  769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:01:47.097843  769388 out.go:252]   - Booting up control plane ...
	I1227 10:01:47.097949  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:01:47.099592  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:01:47.099673  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:01:47.133940  769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:01:47.134045  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:01:47.147908  769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:01:47.148976  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:01:47.149327  769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:01:47.321604  769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:01:47.321718  769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:05:47.321648  769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305874s
	I1227 10:05:47.321690  769388 kubeadm.go:319] 
	I1227 10:05:47.321762  769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:05:47.321802  769388 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:05:47.321944  769388 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:05:47.321958  769388 kubeadm.go:319] 
	I1227 10:05:47.322066  769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:05:47.322103  769388 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:05:47.322153  769388 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:05:47.322165  769388 kubeadm.go:319] 
	I1227 10:05:47.325886  769388 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:05:47.326310  769388 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:05:47.326424  769388 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:05:47.326663  769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:05:47.326673  769388 kubeadm.go:319] 
	I1227 10:05:47.326742  769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:05:47.326828  769388 kubeadm.go:403] duration metric: took 8m5.922999378s to StartCluster
	I1227 10:05:47.326868  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:05:47.326939  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:05:47.362142  769388 cri.go:96] found id: ""
	I1227 10:05:47.362184  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.362193  769388 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:05:47.362200  769388 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:05:47.362260  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:05:47.386992  769388 cri.go:96] found id: ""
	I1227 10:05:47.387017  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.387026  769388 logs.go:284] No container was found matching "etcd"
	I1227 10:05:47.387033  769388 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:05:47.387095  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:05:47.412506  769388 cri.go:96] found id: ""
	I1227 10:05:47.412532  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.412541  769388 logs.go:284] No container was found matching "coredns"
	I1227 10:05:47.412549  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:05:47.412607  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:05:47.440415  769388 cri.go:96] found id: ""
	I1227 10:05:47.440440  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.440449  769388 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:05:47.440456  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:05:47.440515  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:05:47.465494  769388 cri.go:96] found id: ""
	I1227 10:05:47.465522  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.465530  769388 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:05:47.465538  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:05:47.465601  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:05:47.494595  769388 cri.go:96] found id: ""
	I1227 10:05:47.494628  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.494638  769388 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:05:47.494645  769388 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:05:47.494716  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:05:47.523703  769388 cri.go:96] found id: ""
	I1227 10:05:47.523728  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.523736  769388 logs.go:284] No container was found matching "kindnet"
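	With StartCluster failed, minikube inventories the node by asking the CRI for each expected control-plane container by name; every query above returns an empty ID list, confirming that no control-plane container was ever created. A sketch of that inventory loop, using only the container names and crictl flags from the log:
	
		package main
	
		import (
			"fmt"
			"os/exec"
			"strings"
		)
	
		func main() {
			names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
				"kube-proxy", "kube-controller-manager", "kindnet"}
			for _, name := range names {
				// --quiet prints container IDs only, one per line; empty output
				// means no container (running or exited) matches the name.
				out, _ := exec.Command("sudo", "crictl", "--timeout=10s",
					"ps", "-a", "--quiet", "--name="+name).Output()
				if len(strings.Fields(string(out))) == 0 {
					fmt.Printf("no container found matching %q\n", name)
				}
			}
		}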
	I1227 10:05:47.523746  769388 logs.go:123] Gathering logs for Docker ...
	I1227 10:05:47.523757  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 10:05:47.546298  769388 logs.go:123] Gathering logs for container status ...
	I1227 10:05:47.546329  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:05:47.584884  769388 logs.go:123] Gathering logs for kubelet ...
	I1227 10:05:47.584959  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:05:47.653574  769388 logs.go:123] Gathering logs for dmesg ...
	I1227 10:05:47.653612  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:05:47.671978  769388 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:05:47.672006  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:05:47.737784  769388 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:47.729462    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.730146    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.731816    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.732344    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.733957    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:05:47.729462    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.730146    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.731816    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.732344    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.733957    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
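	The describe-nodes failure follows directly from the kubelet never starting: no kubelet means no static pods, so nothing listens on the apiserver port and every request to localhost:8443 is refused at the TCP layer. A quick check that distinguishes "nothing listening" from "listening but unhealthy" (port taken from the log):
	
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		func main() {
			// A refused TCP connect means no process is bound to the port at
			// all, as opposed to an apiserver that is up but failing checks.
			conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
			if err != nil {
				fmt.Println("apiserver not listening:", err)
				return
			}
			conn.Close()
			fmt.Println("something is listening on 8443")
		}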
	W1227 10:05:47.737860  769388 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:05:47.737902  769388 out.go:285] * 
	W1227 10:05:47.737955  769388 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:05:47.737974  769388 out.go:285] * 
	W1227 10:05:47.738225  769388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:05:47.743845  769388 out.go:203] 
	W1227 10:05:47.746703  769388 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:05:47.746744  769388 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:05:47.746767  769388 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
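	The suggestion and related issue above point at the usual cause of this failure mode under --force-systemd: the kubelet and the container runtime disagreeing on the cgroup driver (systemd vs cgroupfs), which makes the kubelet exit before it can ever serve healthz. A plausible retry combining this run's flags with the suggested extra-config (assembled here for illustration, not verified against this run):
	
		out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --extra-config=kubelet.cgroup-driver=systemd --driver=docker --container-runtime=docker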
	I1227 10:05:47.749808  769388 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-574701 ssh "docker info --format {{.CgroupDriver}}"
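	This ssh command is the assertion at the heart of TestForceSystemdFlag: with --force-systemd, docker info --format {{.CgroupDriver}} inside the node should print systemd. The same check can be scripted against any Docker daemon; a sketch (the expected value reflects the test's intent, not the exact assertion in docker_test.go):
	
		package main
	
		import (
			"fmt"
			"os/exec"
			"strings"
		)
	
		func main() {
			// docker info's Go template exposes the daemon's cgroup driver directly.
			out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
			if err != nil {
				fmt.Println("docker info failed:", err)
				return
			}
			if driver := strings.TrimSpace(string(out)); driver != "systemd" {
				fmt.Printf("unexpected cgroup driver: %q\n", driver)
			}
		}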
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-27 10:05:48.201467771 +0000 UTC m=+2830.367898401
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-574701
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-574701:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361",
	        "Created": "2025-12-27T09:57:29.403390828Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 769964,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:57:29.482246999Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/hostname",
	        "HostsPath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/hosts",
	        "LogPath": "/var/lib/docker/containers/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361/acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361-json.log",
	        "Name": "/force-systemd-flag-574701",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-574701:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-574701",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "acba4de42c5d17cf7f8ba296b01cee507f6a5ef923e4a48391c4d92ba7508361",
	                "LowerDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec-init/diff:/var/lib/docker/overlay2/9b533b4deb9c1d535741c7522fe23eacc0fb251795d87993eb74f4ff9ff9e74e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d0acbf588c6554f2130bb6fb2bd30e909f03b1b39b51e989956a9e2920bc2ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-574701",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-574701/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-574701",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-574701",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-574701",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8d87e9a2d21908ab80189916324051f8bc9d66c1dfafe0c47016cc5e1cb3446a",
	            "SandboxKey": "/var/run/docker/netns/8d87e9a2d219",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33723"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33724"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33727"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33725"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33726"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-574701": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:03:d3:06:6a:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5bc7f6c9a07c2381c87e9dc3b31039111859cad13b96f3981123438fcc35f62",
	                    "EndpointID": "5b85b8045fe8a53821c4c468f5cf0eaeb629c07482c241bb95efc2716476875a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-574701",
	                        "acba4de42c5d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
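	Most of the inspect output is routine; the fields relevant to running systemd inside a container, HostConfig.Privileged (true), HostConfig.CgroupnsMode ("host"), and the /usr/local/bin/entrypoint /sbin/init entrypoint, all look as expected here, so the container configuration itself is not the obvious culprit. Individual fields can be pulled without scanning the full JSON; a sketch using docker inspect format templates, whose paths mirror the JSON structure above:
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		func main() {
			name := "force-systemd-flag-574701"
			for _, tmpl := range []string{
				"{{.HostConfig.Privileged}}",
				"{{.HostConfig.CgroupnsMode}}",
				"{{.Config.Entrypoint}}",
			} {
				// Each -f template path walks the same JSON shown in the report.
				out, err := exec.Command("docker", "inspect", "-f", tmpl, name).Output()
				if err != nil {
					fmt.Println("inspect failed:", err)
					return
				}
				fmt.Printf("%s => %s", tmpl, out)
			}
		}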
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-574701 -n force-systemd-flag-574701
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-574701 -n force-systemd-flag-574701: exit status 6 (311.524486ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:05:48.516963  781664 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-574701" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
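	The exit-6 status reflects minikube finding no entry for the profile in the runner's kubeconfig: kubeadm init never completed, so the cluster was never written into /home/jenkins/minikube-integration/22343-548332/kubeconfig, which also explains the "stale minikube-vm" warning. A sketch of the same kind of lookup with client-go (path and profile name taken from the log; the real check in status.go also inspects the recorded endpoint):
	
		package main
	
		import (
			"fmt"
	
			"k8s.io/client-go/tools/clientcmd"
		)
	
		func main() {
			path := "/home/jenkins/minikube-integration/22343-548332/kubeconfig"
			cfg, err := clientcmd.LoadFromFile(path)
			if err != nil {
				fmt.Println("cannot load kubeconfig:", err)
				return
			}
			// A profile that never finished bootstrapping has no context entry.
			if _, ok := cfg.Contexts["force-systemd-flag-574701"]; !ok {
				fmt.Println(`"force-systemd-flag-574701" does not appear in`, path)
			}
		}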
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-574701 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-334346 sudo cat /etc/kubernetes/kubelet.conf                                                                        │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /var/lib/kubelet/config.yaml                                                                        │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ delete  │ -p offline-docker-663445                                                                                                      │ offline-docker-663445     │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ 27 Dec 25 09:57 UTC │
	│ ssh     │ -p cilium-334346 sudo systemctl status docker --all --full --no-pager                                                         │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat docker --no-pager                                                                         │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /etc/docker/daemon.json                                                                             │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo docker system info                                                                                      │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cri-dockerd --version                                                                                   │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat containerd --no-pager                                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /etc/containerd/config.toml                                                                         │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo containerd config dump                                                                                  │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat crio --no-pager                                                                           │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo crio config                                                                                             │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ delete  │ -p cilium-334346                                                                                                              │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ 27 Dec 25 09:57 UTC │
	│ start   │ -p force-systemd-env-159617 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-159617  │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ start   │ -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ force-systemd-flag-574701 ssh docker info --format {{.CgroupDriver}}                                                          │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
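	
	The last Audit row is the test's verification step: after starting with --force-systemd, TestForceSystemdFlag asserts that Docker inside the node reports the systemd cgroup driver. A rough Go equivalent of that assertion (illustrative only; binary and profile names are taken from this run, not the test's literal code):
	
		package main
	
		import (
			"fmt"
			"os/exec"
			"strings"
		)
	
		func main() {
			profile := "force-systemd-flag-574701"
			out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
				"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
			if err != nil {
				fmt.Printf("ssh failed: %v\n%s", err, out)
				return
			}
			if got := strings.TrimSpace(string(out)); got != "systemd" {
				fmt.Printf("expected cgroup driver systemd, got %q\n", got)
			}
		}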
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:57:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:57:23.854045  769388 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:57:23.854214  769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.854225  769388 out.go:374] Setting ErrFile to fd 2...
	I1227 09:57:23.854241  769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.854500  769388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:57:23.854935  769388 out.go:368] Setting JSON to false
	I1227 09:57:23.855775  769388 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16795,"bootTime":1766812649,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:57:23.855839  769388 start.go:143] virtualization:  
	I1227 09:57:23.860623  769388 out.go:179] * [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:57:23.864301  769388 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:57:23.864369  769388 notify.go:221] Checking for updates...
	I1227 09:57:23.871858  769388 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:57:23.879831  769388 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:57:23.884111  769388 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:57:23.887027  769388 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:57:23.890016  769388 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:57:23.893523  769388 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:23.893679  769388 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:57:23.942486  769388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:57:23.942607  769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:24.033935  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-27 09:57:24.020858019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:24.034041  769388 docker.go:319] overlay module found
	I1227 09:57:24.037348  769388 out.go:179] * Using the docker driver based on user configuration
	I1227 09:57:24.040109  769388 start.go:309] selected driver: docker
	I1227 09:57:24.040131  769388 start.go:928] validating driver "docker" against <nil>
	I1227 09:57:24.040145  769388 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:57:24.040848  769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:24.119453  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-27 09:57:24.103606726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:24.119606  769388 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:57:24.119820  769388 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:57:24.124043  769388 out.go:179] * Using Docker driver with root privileges
	I1227 09:57:24.126916  769388 cni.go:84] Creating CNI manager for ""
	I1227 09:57:24.126993  769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:24.127014  769388 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 09:57:24.127097  769388 start.go:353] cluster config:
	{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:24.130340  769388 out.go:179] * Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
	I1227 09:57:24.133152  769388 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 09:57:24.136080  769388 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:57:24.140060  769388 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:57:24.140141  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:24.140165  769388 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1227 09:57:24.140177  769388 cache.go:65] Caching tarball of preloaded images
	I1227 09:57:24.140256  769388 preload.go:251] Found /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:57:24.140271  769388 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 09:57:24.140383  769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
	I1227 09:57:24.140406  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json: {Name:mk4143ebcade308fb419077e3f8332f378dc7937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:24.161069  769388 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:57:24.161091  769388 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:57:24.161109  769388 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:57:24.161140  769388 start.go:360] acquireMachinesLock for force-systemd-flag-574701: {Name:mkf48a67b67df727c9d74e45482507e00be21327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:57:24.161254  769388 start.go:364] duration metric: took 93.536µs to acquireMachinesLock for "force-systemd-flag-574701"
	I1227 09:57:24.161290  769388 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 09:57:24.161353  769388 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:57:23.421132  769090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:57:23.421440  769090 start.go:159] libmachine.API.Create for "force-systemd-env-159617" (driver="docker")
	I1227 09:57:23.421474  769090 client.go:173] LocalClient.Create starting
	I1227 09:57:23.421564  769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
	I1227 09:57:23.421635  769090 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:23.421681  769090 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:23.421760  769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
	I1227 09:57:23.421803  769090 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:23.421839  769090 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:23.422293  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:57:23.444615  769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:57:23.444701  769090 network_create.go:284] running [docker network inspect force-systemd-env-159617] to gather additional debugging logs...
	I1227 09:57:23.444722  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617
	W1227 09:57:23.469730  769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 returned with exit code 1
	I1227 09:57:23.469759  769090 network_create.go:287] error running [docker network inspect force-systemd-env-159617]: docker network inspect force-systemd-env-159617: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-159617 not found
	I1227 09:57:23.469771  769090 network_create.go:289] output of [docker network inspect force-systemd-env-159617]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-159617 not found
	
	** /stderr **
	I1227 09:57:23.469879  769090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:23.484995  769090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
	I1227 09:57:23.485264  769090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
	I1227 09:57:23.485535  769090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
	I1227 09:57:23.485842  769090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-74a76dba2194 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:01:b7:05:f7:b5} reservation:<nil>}
	I1227 09:57:23.486201  769090 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47360}
	I1227 09:57:23.486220  769090 network_create.go:124] attempt to create docker network force-systemd-env-159617 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 09:57:23.486272  769090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-159617 force-systemd-env-159617
	I1227 09:57:23.588843  769090 network_create.go:108] docker network force-systemd-env-159617 192.168.85.0/24 created
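	The "skipping subnet" probes above walk a fixed ladder: starting at 192.168.49.0/24, the third octet advances in steps of 9 until a /24 turns up that no existing bridge holds, which is why this run settles on 192.168.85.0/24. A condensed sketch of that walk (inferred from the log, not minikube's actual network.go):
	
		package main
	
		import (
			"errors"
			"fmt"
		)
	
		// freeSubnet returns the first candidate /24 not already claimed
		// by an existing docker bridge network.
		func freeSubnet(taken map[string]bool) (string, error) {
			for octet := 49; octet < 256; octet += 9 {
				cidr := fmt.Sprintf("192.168.%d.0/24", octet)
				if !taken[cidr] {
					return cidr, nil
				}
			}
			return "", errors.New("no free /24 in 192.168.0.0/16")
		}
	
		func main() {
			// Subnets reported as taken in the log above.
			taken := map[string]bool{
				"192.168.49.0/24": true, "192.168.58.0/24": true,
				"192.168.67.0/24": true, "192.168.76.0/24": true,
			}
			s, _ := freeSubnet(taken)
			fmt.Println(s) // 192.168.85.0/24, matching this run
		}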
	I1227 09:57:23.588880  769090 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-159617" container
	I1227 09:57:23.588951  769090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:57:23.607164  769090 cli_runner.go:164] Run: docker volume create force-systemd-env-159617 --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:57:23.627044  769090 oci.go:103] Successfully created a docker volume force-systemd-env-159617
	I1227 09:57:23.627271  769090 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-159617-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --entrypoint /usr/bin/test -v force-systemd-env-159617:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:57:24.208049  769090 oci.go:107] Successfully prepared a docker volume force-systemd-env-159617
	I1227 09:57:24.208115  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:24.208125  769090 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:57:24.208197  769090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
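	The volume preparation above is all throwaway containers: docker volume create, a priming sidecar that runs /usr/bin/test -d /var/lib against the volume, then a tar container that unpacks the lz4 preload straight into it. Driven from Go, the sequence looks roughly like this (image digest and cache path shortened for readability; a sketch, not minikube's kic code):
	
		package main
	
		import (
			"fmt"
			"os/exec"
		)
	
		func docker(args ...string) {
			if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("docker %v: %v\n%s", args, err, out))
			}
		}
	
		func main() {
			vol := "force-systemd-env-159617"
			img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316" // the log pins this by sha256 digest
			tarball := "/abs/path/to/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4" // bind mounts need the full cache path shown in the log
	
			// 1. Named volume that will back the node's /var.
			docker("volume", "create", vol)
			// 2. Priming sidecar: verifies /var/lib exists inside the volume.
			docker("run", "--rm", "--entrypoint", "/usr/bin/test",
				"-v", vol+":/var", img, "-d", "/var/lib")
			// 3. Extract the preloaded images directly into the volume.
			docker("run", "--rm", "--entrypoint", "/usr/bin/tar",
				"-v", tarball+":/preloaded.tar:ro", "-v", vol+":/extractDir",
				img, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		}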
	I1227 09:57:24.165884  769388 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:57:24.166208  769388 start.go:159] libmachine.API.Create for "force-systemd-flag-574701" (driver="docker")
	I1227 09:57:24.166249  769388 client.go:173] LocalClient.Create starting
	I1227 09:57:24.166322  769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
	I1227 09:57:24.166357  769388 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:24.166372  769388 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:24.166421  769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
	I1227 09:57:24.166486  769388 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:24.166501  769388 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:24.166999  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:57:24.184851  769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:57:24.184931  769388 network_create.go:284] running [docker network inspect force-systemd-flag-574701] to gather additional debugging logs...
	I1227 09:57:24.184947  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701
	W1227 09:57:24.201338  769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 returned with exit code 1
	I1227 09:57:24.201367  769388 network_create.go:287] error running [docker network inspect force-systemd-flag-574701]: docker network inspect force-systemd-flag-574701: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-574701 not found
	I1227 09:57:24.201381  769388 network_create.go:289] output of [docker network inspect force-systemd-flag-574701]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-574701 not found
	
	** /stderr **
	I1227 09:57:24.201475  769388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:24.231038  769388 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
	I1227 09:57:24.231335  769388 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
	I1227 09:57:24.231654  769388 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
	I1227 09:57:24.232203  769388 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d880}
	I1227 09:57:24.232227  769388 network_create.go:124] attempt to create docker network force-systemd-flag-574701 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:57:24.232294  769388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-574701 force-systemd-flag-574701
	I1227 09:57:24.312633  769388 network_create.go:108] docker network force-systemd-flag-574701 192.168.76.0/24 created
	I1227 09:57:24.312662  769388 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-574701" container
	I1227 09:57:24.312733  769388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:57:24.330428  769388 cli_runner.go:164] Run: docker volume create force-systemd-flag-574701 --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:57:24.354470  769388 oci.go:103] Successfully created a docker volume force-systemd-flag-574701
	I1227 09:57:24.354571  769388 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-574701-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --entrypoint /usr/bin/test -v force-systemd-flag-574701:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:57:25.150777  769388 oci.go:107] Successfully prepared a docker volume force-systemd-flag-574701
	I1227 09:57:25.150847  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:25.150858  769388 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:57:25.150937  769388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:57:29.290594  769090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (5.082338598s)
	I1227 09:57:29.290643  769090 kic.go:203] duration metric: took 5.082509768s to extract preloaded images to volume ...
	W1227 09:57:29.290794  769090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:57:29.290951  769090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:57:29.395948  769090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-159617 --name force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-159617 --network force-systemd-env-159617 --ip 192.168.85.2 --volume force-systemd-env-159617:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:57:29.916266  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Running}}
	I1227 09:57:29.946688  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:29.995989  769090 cli_runner.go:164] Run: docker exec force-systemd-env-159617 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:57:30.096142  769090 oci.go:144] the created container "force-systemd-env-159617" has a running status.
	I1227 09:57:30.096178  769090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa...
	I1227 09:57:30.500317  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:57:30.500877  769090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:57:30.556340  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:30.597973  769090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:57:30.597993  769090 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-159617 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:57:30.707985  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:30.755347  769090 machine.go:94] provisionDockerMachine start ...
	I1227 09:57:30.755426  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:30.787678  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:30.788014  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:30.788023  769090 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:57:30.789480  769090 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40286->127.0.0.1:33728: read: connection reset by peer
	I1227 09:57:29.285806  769388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.134820012s)
	I1227 09:57:29.285838  769388 kic.go:203] duration metric: took 4.134977669s to extract preloaded images to volume ...
	W1227 09:57:29.285987  769388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:57:29.286133  769388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:57:29.373204  769388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-574701 --name force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-574701 --network force-systemd-flag-574701 --ip 192.168.76.2 --volume force-systemd-flag-574701:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:57:29.767688  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Running}}
	I1227 09:57:29.794873  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:29.823050  769388 cli_runner.go:164] Run: docker exec force-systemd-flag-574701 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:57:29.890557  769388 oci.go:144] the created container "force-systemd-flag-574701" has a running status.
	I1227 09:57:29.890594  769388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa...
	I1227 09:57:30.464624  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:57:30.464726  769388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:57:30.506648  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:30.563495  769388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:57:30.563516  769388 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-574701 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:57:30.675307  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:30.705027  769388 machine.go:94] provisionDockerMachine start ...
	I1227 09:57:30.705109  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:30.748542  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:30.748883  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:30.748899  769388 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:57:30.749537  769388 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
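	Both provisioners fail their first handshake just above (connection reset for one, EOF for the other) because sshd inside the freshly started container is not yet accepting connections; the successful hostname command a few seconds later shows the dial is simply retried. A minimal retry loop in that spirit, using golang.org/x/crypto/ssh (illustrative; libmachine ships its own SSH client):
	
		package main
	
		import (
			"fmt"
			"time"
	
			"golang.org/x/crypto/ssh"
		)
	
		// dialWithRetry keeps dialing until sshd comes up instead of
		// giving up on the first reset/EOF, as seen in the log above.
		func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
			var err error
			for i := 0; i < attempts; i++ {
				var c *ssh.Client
				if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
					return c, nil
				}
				time.Sleep(time.Second) // container boot settles within a few seconds here
			}
			return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
		}
	
		func main() {
			cfg := &ssh.ClientConfig{
				User:            "docker",
				Auth:            []ssh.AuthMethod{}, // key auth from the profile's id_rsa would go here
				HostKeyCallback: ssh.InsecureIgnoreHostKey(),
				Timeout:         5 * time.Second,
			}
			if c, err := dialWithRetry("127.0.0.1:33723", cfg, 10); err == nil { // port from the log above
				c.Close()
			}
		}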
	I1227 09:57:33.935423  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
	
	I1227 09:57:33.935449  769090 ubuntu.go:182] provisioning hostname "force-systemd-env-159617"
	I1227 09:57:33.935561  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:33.958892  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:33.959223  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:33.959235  769090 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-159617 && echo "force-systemd-env-159617" | sudo tee /etc/hostname
	I1227 09:57:34.119941  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
	
	I1227 09:57:34.120013  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.142778  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.143089  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.143106  769090 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-159617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-159617/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-159617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:57:34.287061  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:57:34.287083  769090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
	I1227 09:57:34.287101  769090 ubuntu.go:190] setting up certificates
	I1227 09:57:34.287154  769090 provision.go:84] configureAuth start
	I1227 09:57:34.287222  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:34.331489  769090 provision.go:143] copyHostCerts
	I1227 09:57:34.331534  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.331572  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
	I1227 09:57:34.331590  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.331648  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
	I1227 09:57:34.331728  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.331749  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
	I1227 09:57:34.331757  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.331779  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
	I1227 09:57:34.331821  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.331841  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
	I1227 09:57:34.331846  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.331869  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
	I1227 09:57:34.331917  769090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-159617 san=[127.0.0.1 192.168.85.2 force-systemd-env-159617 localhost minikube]
	I1227 09:57:34.598391  769090 provision.go:177] copyRemoteCerts
	I1227 09:57:34.598509  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:57:34.598589  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.616730  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:34.716531  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:57:34.716639  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:57:34.746980  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:57:34.747057  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:57:34.766043  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:57:34.766100  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:57:34.785469  769090 provision.go:87] duration metric: took 498.291074ms to configureAuth
	I1227 09:57:34.785494  769090 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:57:34.785662  769090 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:34.785721  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.802871  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.803337  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.803351  769090 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 09:57:34.967701  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 09:57:34.967720  769090 ubuntu.go:71] root file system type: overlay
	I1227 09:57:34.967841  769090 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 09:57:34.967907  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.988654  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.988961  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.989046  769090 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 09:57:35.153832  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 09:57:35.153922  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:35.181379  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:35.181695  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:35.181712  769090 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
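The SSH command above is minikube's idempotent unit update: the desired unit is staged as docker.service.new, diffed against the live unit, and only if the two differ is the staging file moved into place followed by a forced daemon-reload, enable, and restart. A minimal bash sketch of the same idiom (paths illustrative, not taken from the test):

    new=/tmp/docker.service.new               # hypothetical staging path
    cur=/lib/systemd/system/docker.service
    if ! sudo diff -u "$cur" "$new"; then     # diff exits non-zero on any difference
        sudo mv "$new" "$cur"
        sudo systemctl daemon-reload
        sudo systemctl restart docker
    fi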
	I1227 09:57:36.406595  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 09:57:35.148525118 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 09:57:36.406630  769090 machine.go:97] duration metric: took 5.651265169s to provisionDockerMachine
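The diff output above confirms what forced-systemd provisioning changed: the stock ExecStart is cleared and replaced with a dockerd invocation that adds the TCP/TLS hosts, --default-ulimit=nofile, the provider label, and the service-CIDR insecure registry. The cgroup driver itself is set later via /etc/docker/daemon.json (docker.go:578 below); once docker restarts, the effective driver can be checked with the same query minikube runs near the end of provisioning:

    # Illustrative verification, mirroring commands that appear later in this log:
    sudo systemctl cat docker.service | grep '^ExecStart='   # merged unit as systemd sees it
    docker info --format '{{.CgroupDriver}}'                 # expected to print: systemd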
	I1227 09:57:36.406643  769090 client.go:176] duration metric: took 12.985158917s to LocalClient.Create
	I1227 09:57:36.406661  769090 start.go:167] duration metric: took 12.98522367s to libmachine.API.Create "force-systemd-env-159617"
	I1227 09:57:36.406668  769090 start.go:293] postStartSetup for "force-systemd-env-159617" (driver="docker")
	I1227 09:57:36.406681  769090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:57:36.406740  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:57:36.406784  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.424421  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.529164  769090 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:57:36.534359  769090 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:57:36.534393  769090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:57:36.534406  769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
	I1227 09:57:36.534457  769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
	I1227 09:57:36.534546  769090 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
	I1227 09:57:36.534559  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
	I1227 09:57:36.534656  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:57:36.545176  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:36.564519  769090 start.go:296] duration metric: took 157.818194ms for postStartSetup
	I1227 09:57:36.564872  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:36.582964  769090 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/config.json ...
	I1227 09:57:36.583262  769090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:57:36.583316  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.603598  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.705489  769090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:57:36.712003  769090 start.go:128] duration metric: took 13.295769122s to createHost
	I1227 09:57:36.712030  769090 start.go:83] releasing machines lock for "force-systemd-env-159617", held for 13.295895493s
	I1227 09:57:36.712104  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:36.735458  769090 ssh_runner.go:195] Run: cat /version.json
	I1227 09:57:36.735509  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.735527  769090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:57:36.735606  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.763793  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.767335  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.874762  769090 ssh_runner.go:195] Run: systemctl --version
	I1227 09:57:36.974322  769090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:57:36.981372  769090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:57:36.981442  769090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:57:37.027684  769090 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:57:37.027787  769090 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.027825  769090 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.028014  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.048308  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:57:37.060423  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:57:37.072092  769090 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:57:37.072150  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:57:37.082000  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.091287  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:57:37.099834  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.120427  769090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:57:37.128839  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:57:37.139785  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:57:37.156006  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:57:37.167227  769090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:57:37.176858  769090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:57:37.188913  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:37.345099  769090 ssh_runner.go:195] Run: sudo systemctl restart containerd
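The sed sequence above rewrites /etc/containerd/config.toml so containerd agrees with the enforced driver: SystemdCgroup = true under the runc v2 runtime, the pause image pinned to registry.k8s.io/pause:3.10.1, and the CNI conf_dir pointed at /etc/cni/net.d. An illustrative spot-check of the result:

    # Keys set by the sed commands above (comments show the expected values):
    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    #   SystemdCgroup = true
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   conf_dir = "/etc/cni/net.d"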
	I1227 09:57:37.452805  769090 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.452846  769090 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.452907  769090 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 09:57:37.474525  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.495905  769090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:57:37.546927  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.567236  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:57:37.591088  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.608681  769090 ssh_runner.go:195] Run: which cri-dockerd
	I1227 09:57:37.613473  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 09:57:37.622987  769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 09:57:37.639261  769090 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 09:57:37.803450  769090 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 09:57:37.985157  769090 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 09:57:37.985302  769090 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 09:57:38.001357  769090 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 09:57:38.018865  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:33.902589  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
	
	I1227 09:57:33.902611  769388 ubuntu.go:182] provisioning hostname "force-systemd-flag-574701"
	I1227 09:57:33.902682  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:33.920165  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:33.920469  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:33.920480  769388 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-574701 && echo "force-systemd-flag-574701" | sudo tee /etc/hostname
	I1227 09:57:34.085277  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
	
	I1227 09:57:34.085356  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.102383  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.102698  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.102716  769388 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-574701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-574701/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-574701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:57:34.255031  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: 
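The guarded script above follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1: it rewrites an existing 127.0.1.1 entry if present and only appends one otherwise, so repeated provisioning never stacks duplicate lines. An illustrative post-check:

    # Hostname taken from the log above; grep -x matches the whole line.
    grep -x '127.0.1.1 force-systemd-flag-574701' /etc/hosts && hostname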
	I1227 09:57:34.255059  769388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
	I1227 09:57:34.255083  769388 ubuntu.go:190] setting up certificates
	I1227 09:57:34.255093  769388 provision.go:84] configureAuth start
	I1227 09:57:34.255175  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:34.271814  769388 provision.go:143] copyHostCerts
	I1227 09:57:34.271855  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.271887  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
	I1227 09:57:34.271900  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.271973  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
	I1227 09:57:34.272067  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.272089  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
	I1227 09:57:34.272097  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.272126  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
	I1227 09:57:34.272178  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.272198  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
	I1227 09:57:34.272205  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.272232  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
	I1227 09:57:34.272293  769388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-574701 san=[127.0.0.1 192.168.76.2 force-systemd-flag-574701 localhost minikube]
	I1227 09:57:34.545510  769388 provision.go:177] copyRemoteCerts
	I1227 09:57:34.545576  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:57:34.545630  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.562287  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:34.663483  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:57:34.663552  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:57:34.681829  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:57:34.681902  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:57:34.701079  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:57:34.701139  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:57:34.722250  769388 provision.go:87] duration metric: took 467.13373ms to configureAuth
	I1227 09:57:34.722280  769388 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:57:34.722503  769388 config.go:182] Loaded profile config "force-systemd-flag-574701": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:34.722587  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.748482  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.748825  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.748842  769388 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 09:57:34.911917  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 09:57:34.911937  769388 ubuntu.go:71] root file system type: overlay
	I1227 09:57:34.912090  769388 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 09:57:34.912153  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.931590  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.931909  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.931998  769388 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 09:57:35.094955  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 09:57:35.095071  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:35.115477  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:35.115820  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:35.115843  769388 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 09:57:36.313708  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 09:57:35.088526773 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 09:57:36.313732  769388 machine.go:97] duration metric: took 5.608683566s to provisionDockerMachine
	I1227 09:57:36.313745  769388 client.go:176] duration metric: took 12.147489846s to LocalClient.Create
	I1227 09:57:36.313757  769388 start.go:167] duration metric: took 12.14755212s to libmachine.API.Create "force-systemd-flag-574701"
	I1227 09:57:36.313768  769388 start.go:293] postStartSetup for "force-systemd-flag-574701" (driver="docker")
	I1227 09:57:36.313777  769388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:57:36.313843  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:57:36.313894  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.333968  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.436051  769388 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:57:36.439811  769388 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:57:36.439837  769388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:57:36.439848  769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
	I1227 09:57:36.439901  769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
	I1227 09:57:36.439994  769388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
	I1227 09:57:36.440010  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
	I1227 09:57:36.440117  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:57:36.449353  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:36.472877  769388 start.go:296] duration metric: took 159.095049ms for postStartSetup
	I1227 09:57:36.473245  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:36.490073  769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
	I1227 09:57:36.490364  769388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:57:36.490419  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.508708  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.616568  769388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:57:36.622218  769388 start.go:128] duration metric: took 12.460850316s to createHost
	I1227 09:57:36.622246  769388 start.go:83] releasing machines lock for "force-systemd-flag-574701", held for 12.460980323s
	I1227 09:57:36.622323  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:36.641788  769388 ssh_runner.go:195] Run: cat /version.json
	I1227 09:57:36.641849  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.642098  769388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:57:36.642163  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.664287  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.672747  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.780184  769388 ssh_runner.go:195] Run: systemctl --version
	I1227 09:57:36.880930  769388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:57:36.887011  769388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:57:36.887080  769388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:57:36.924112  769388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:57:36.924139  769388 start.go:496] detecting cgroup driver to use...
	I1227 09:57:36.924152  769388 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:36.924252  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:36.946873  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:57:36.956487  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:57:36.966480  769388 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:57:36.966545  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:57:36.977403  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:36.987483  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:57:36.998514  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.010694  769388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:57:37.022875  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:57:37.036011  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:57:37.044803  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:57:37.054260  769388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:57:37.063604  769388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:57:37.071796  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:37.216587  769388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:57:37.323467  769388 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.323492  769388 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.323546  769388 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 09:57:37.352336  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.365635  769388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:57:37.402353  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.420004  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:57:37.441069  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.461000  769388 ssh_runner.go:195] Run: which cri-dockerd
	I1227 09:57:37.468781  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 09:57:37.477924  769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 09:57:37.502109  769388 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 09:57:37.672967  769388 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 09:57:37.840323  769388 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 09:57:37.840416  769388 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 09:57:37.872525  769388 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 09:57:37.886221  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:38.039548  769388 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 09:57:38.563380  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:57:38.577307  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 09:57:38.592258  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:38.608999  769388 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 09:57:38.783640  769388 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 09:57:38.955435  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.116493  769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 09:57:39.131867  769388 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 09:57:39.146438  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.292670  769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 09:57:39.371970  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:39.392203  769388 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 09:57:39.392325  769388 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 09:57:39.396824  769388 start.go:574] Will wait 60s for crictl version
	I1227 09:57:39.396962  769388 ssh_runner.go:195] Run: which crictl
	I1227 09:57:39.400890  769388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:57:39.425825  769388 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 09:57:39.425938  769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.452940  769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:38.182967  769090 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 09:57:38.643595  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:57:38.659567  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 09:57:38.676415  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:38.693157  769090 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 09:57:38.864384  769090 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 09:57:39.021630  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.162919  769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 09:57:39.195686  769090 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 09:57:39.211669  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.365125  769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 09:57:39.465622  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:39.482004  769090 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 09:57:39.482130  769090 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 09:57:39.486220  769090 start.go:574] Will wait 60s for crictl version
	I1227 09:57:39.486340  769090 ssh_runner.go:195] Run: which crictl
	I1227 09:57:39.491356  769090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:57:39.522612  769090 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 09:57:39.522673  769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.553580  769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.589853  769090 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 09:57:39.589955  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:39.609607  769090 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:57:39.613910  769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
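The one-liner above refreshes /etc/hosts by filtering out any stale host.minikube.internal entry, appending the new one to a temp file, and cp-ing the result back. Inside a container /etc/hosts is usually a bind mount, which is likely why the code copies over the file (preserving the inode) rather than using sed -i or mv, both of which replace the file. The same pattern with hypothetical values:

    # NAME and ADDR are illustrative, not taken from the test.
    NAME=host.minikube.internal; ADDR=192.168.85.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$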
	I1227 09:57:39.623309  769090 kubeadm.go:884] updating cluster {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:57:39.623458  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:39.623516  769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.644906  769090 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.644931  769090 docker.go:624] Images already preloaded, skipping extraction
	I1227 09:57:39.644988  769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.664959  769090 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
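The image list is taken twice: once to decide whether the preloaded tarball for v1.35.0 needs extracting, and once more to confirm, after which loading is skipped (cache_images.go:86 just below). An illustrative re-check inside the node:

    # Should include every entry from the stdout block above:
    docker images --format '{{.Repository}}:{{.Tag}}' | sort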
	I1227 09:57:39.664988  769090 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:57:39.664998  769090 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1227 09:57:39.665088  769090 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-159617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:57:39.665158  769090 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 09:57:39.747517  769090 cni.go:84] Creating CNI manager for ""
	I1227 09:57:39.747540  769090 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:39.747563  769090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:57:39.747608  769090 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-159617 NodeName:force-systemd-env-159617 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:57:39.747762  769090 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-159617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:57:39.747834  769090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:57:39.760575  769090 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:57:39.760648  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:57:39.775516  769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1227 09:57:39.797752  769090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:57:39.810219  769090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
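The kubeadm.yaml rendered above is tuned for throwaway CI nodes: imageGCHighThresholdPercent: 100 plus all-zero evictionHard thresholds disable disk-pressure eviction, failSwapOn: false tolerates swap, and the zeroed conntrack timeouts tell kube-proxy to skip setting net.netfilter sysctls that are often read-only inside a container. One way to sanity-check such a file, assuming a kubeadm new enough to ship the validate subcommand (v1.26+):

    # Paths taken from the scp lines above; the validate subcommand is an assumption here.
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new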
	I1227 09:57:39.828590  769090 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:57:39.832469  769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.842381  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:40.061511  769090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:57:40.082736  769090 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617 for IP: 192.168.85.2
	I1227 09:57:40.082833  769090 certs.go:195] generating shared ca certs ...
	I1227 09:57:40.082870  769090 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.083102  769090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
	I1227 09:57:40.083211  769090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
	I1227 09:57:40.083245  769090 certs.go:257] generating profile certs ...
	I1227 09:57:40.083338  769090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key
	I1227 09:57:40.083381  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt with IP's: []
	I1227 09:57:40.290500  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt ...
	I1227 09:57:40.290601  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt: {Name:mkdef657d92ac442b8ca8d24bafb061317e911bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.290877  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key ...
	I1227 09:57:40.290927  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key: {Name:mkd98e7a2fa2573ec393c9c33ed2af8ef854cd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.291097  769090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17
	I1227 09:57:40.291156  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:57:40.441193  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 ...
	I1227 09:57:40.441292  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17: {Name:mka639a3de484b92be9c260344df9e8bdedff2cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.441538  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 ...
	I1227 09:57:40.441579  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17: {Name:mkdfe6ab9be254d46412de6c107cb553d654d1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.441720  769090 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt
	I1227 09:57:40.441858  769090 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key
	I1227 09:57:40.441988  769090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key
	I1227 09:57:40.442045  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt with IP's: []
	I1227 09:57:40.780289  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt ...
	I1227 09:57:40.780323  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt: {Name:mk8f859572961556f4c1a1a4febed8df29d82f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780533  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key ...
	I1227 09:57:40.780542  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key: {Name:mk7056050a32483ae445b0ae07006f0562cf0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
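
The apiserver profile cert generated above is signed for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. Purely for illustration, here is a self-signed sketch of a certificate carrying that SAN set; the real minikube helper signs with the shared minikubeCA rather than self-signing, so this only shows the SAN mechanics:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The same IP SAN set the log reports for the apiserver cert.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        // Self-signed: template doubles as parent. minikube instead uses its CA here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
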
	I1227 09:57:40.780640  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:57:40.780659  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:57:40.780678  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:57:40.780691  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:57:40.780705  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:57:40.780722  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:57:40.780742  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:57:40.780754  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:57:40.780817  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
	W1227 09:57:40.780867  769090 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
	I1227 09:57:40.780876  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:57:40.780908  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:57:40.780938  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:57:40.780966  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
	I1227 09:57:40.781023  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:40.781067  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
	I1227 09:57:40.781079  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:40.781090  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
	I1227 09:57:40.781688  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:57:40.814042  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:57:40.838435  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:57:40.880890  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:57:40.906281  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:57:40.928048  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:57:40.950863  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:57:40.973554  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:57:40.993400  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
	I1227 09:57:41.017107  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:57:41.037355  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
	I1227 09:57:41.066525  769090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:57:41.095696  769090 ssh_runner.go:195] Run: openssl version
	I1227 09:57:41.107307  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.118732  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
	I1227 09:57:41.132658  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.138503  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.138605  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.185800  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:57:41.193790  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
	I1227 09:57:41.201492  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.208841  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
	I1227 09:57:41.216427  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.220469  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.220555  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.265817  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.273569  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.281083  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.288616  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:57:41.296277  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.300012  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.300113  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.343100  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:57:41.351309  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
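
The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses to index trusted certificates, and each ln -fs then creates the <hash>.0 symlink (b5213941.0 for minikubeCA, per the log). A sketch of those two steps with os/exec and os.Symlink; the path is the example from this run:

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ln -fs semantics: replace any stale link first
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatal(err)
        }
    }
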
	I1227 09:57:41.358883  769090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:57:41.362914  769090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:57:41.362973  769090 kubeadm.go:401] StartCluster: {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:41.363101  769090 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 09:57:41.381051  769090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:57:41.392106  769090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:57:41.400552  769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:57:41.400659  769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:57:41.412462  769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:57:41.412533  769090 kubeadm.go:158] found existing configuration files:
	
	I1227 09:57:41.412612  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:57:41.421832  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:57:41.421945  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:57:41.432909  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:57:41.443013  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:57:41.443076  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:57:41.451990  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.462018  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:57:41.462083  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.470161  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:57:41.479985  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:57:41.480066  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
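
The grep/rm sequence above implements a simple rule: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so the kubeadm init that follows regenerates it. The same logic, sketched compactly (an approximation of minikube's stale-config cleanup, not its actual code):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, p := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(p)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong endpoint: remove so kubeadm writes a fresh one.
                if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
                    log.Print(rmErr)
                }
            }
        }
    }
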
	I1227 09:57:41.488640  769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:57:41.541967  769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:57:41.544237  769090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:57:41.651990  769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:57:41.652128  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:57:41.652184  769090 kubeadm.go:319] OS: Linux
	I1227 09:57:41.652254  769090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:57:41.652330  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:57:41.652403  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:57:41.652481  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:57:41.652557  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:57:41.652636  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:57:41.652713  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:57:41.652790  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:57:41.652862  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:57:41.748451  769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:57:41.748635  769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:57:41.748758  769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:57:41.778942  769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:57:39.487385  769388 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 09:57:39.487511  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:39.509398  769388 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:57:39.513521  769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.525777  769388 kubeadm.go:884] updating cluster {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:57:39.525889  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:39.525945  769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.550774  769388 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.550799  769388 docker.go:624] Images already preloaded, skipping extraction
	I1227 09:57:39.550866  769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.574219  769388 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.574242  769388 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:57:39.574252  769388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1227 09:57:39.574354  769388 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-574701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:57:39.574415  769388 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
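
This docker info call is the crux of both force-systemd tests: with --force-systemd (or the MINIKUBE_FORCE_SYSTEMD env var) in effect, the Docker daemon inside the node container should report the systemd cgroup driver, which then matches the cgroupDriver: systemd in the kubelet config. The same check, sketched in Go around the docker CLI:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors the log line above: docker info --format {{.CgroupDriver}}
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(strings.TrimSpace(string(out))) // "systemd" when force-systemd took effect
    }
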
	I1227 09:57:39.642105  769388 cni.go:84] Creating CNI manager for ""
	I1227 09:57:39.642130  769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:39.642146  769388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:57:39.642167  769388 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-574701 NodeName:force-systemd-flag-574701 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:57:39.642292  769388 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-574701"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:57:39.642363  769388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:57:39.651846  769388 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:57:39.651910  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:57:39.661240  769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1227 09:57:39.677750  769388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:57:39.692714  769388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1227 09:57:39.705586  769388 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:57:39.709624  769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.719304  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.872388  769388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:57:39.905933  769388 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701 for IP: 192.168.76.2
	I1227 09:57:39.905958  769388 certs.go:195] generating shared ca certs ...
	I1227 09:57:39.905975  769388 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:39.906194  769388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
	I1227 09:57:39.906270  769388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
	I1227 09:57:39.906284  769388 certs.go:257] generating profile certs ...
	I1227 09:57:39.906359  769388 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key
	I1227 09:57:39.906376  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt with IP's: []
	I1227 09:57:40.185176  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt ...
	I1227 09:57:40.185209  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt: {Name:mkd8df8f694ab6bd0be298ca10765d50a0840ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.185510  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key ...
	I1227 09:57:40.185530  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key: {Name:mkedfb2c92eeb1c8634de35cfef29ff1eb8c71f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.185683  769388 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a
	I1227 09:57:40.185706  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:57:40.780814  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a ...
	I1227 09:57:40.780832  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a: {Name:mk220ae28824c87aa5d8ba64a794d883980a39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780959  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a ...
	I1227 09:57:40.780966  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a: {Name:mkac97d48f25e58d566aafd93cbcf157b2cb0117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.781034  769388 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt
	I1227 09:57:40.781140  769388 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key
	I1227 09:57:40.781206  769388 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key
	I1227 09:57:40.781219  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt with IP's: []
	I1227 09:57:40.864310  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt ...
	I1227 09:57:40.864342  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt: {Name:mk5dc7c59c3dfc68c7c8e2186f25c0bda8c48900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.864549  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key ...
	I1227 09:57:40.864569  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key: {Name:mk7098be4d9c15bf1f3c8453e90bcc9388cdc9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.864678  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:57:40.864715  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:57:40.864736  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:57:40.864755  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:57:40.864768  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:57:40.864796  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:57:40.864821  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:57:40.864837  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:57:40.864913  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
	W1227 09:57:40.864990  769388 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
	I1227 09:57:40.865007  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:57:40.865038  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:57:40.865102  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:57:40.865134  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
	I1227 09:57:40.865199  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:40.865244  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:40.865267  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
	I1227 09:57:40.865282  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
	I1227 09:57:40.865799  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:57:40.898569  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:57:40.927873  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:57:40.948313  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:57:40.969255  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:57:40.989875  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:57:41.010787  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:57:41.031724  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:57:41.051433  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:57:41.077779  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
	I1227 09:57:41.108786  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
	I1227 09:57:41.133210  769388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:57:41.147828  769388 ssh_runner.go:195] Run: openssl version
	I1227 09:57:41.154460  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.161904  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
	I1227 09:57:41.169300  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.173499  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.173602  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.219730  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.227914  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.234863  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.242037  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:57:41.252122  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.256231  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.256330  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.303396  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:57:41.311657  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:57:41.319645  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.327015  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
	I1227 09:57:41.334332  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.338256  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.338360  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.382878  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:57:41.390786  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
	I1227 09:57:41.399024  769388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:57:41.403779  769388 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:57:41.403832  769388 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:41.403946  769388 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 09:57:41.429145  769388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:57:41.439644  769388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:57:41.448769  769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:57:41.448834  769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:57:41.460465  769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:57:41.460481  769388 kubeadm.go:158] found existing configuration files:
	
	I1227 09:57:41.460550  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:57:41.471042  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:57:41.471103  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:57:41.480178  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:57:41.490398  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:57:41.490464  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:57:41.499105  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.510257  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:57:41.510321  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.520923  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:57:41.534256  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:57:41.534333  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:57:41.542461  769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:57:41.646824  769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:57:41.648335  769388 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:57:41.753889  769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:57:41.754015  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:57:41.754079  769388 kubeadm.go:319] OS: Linux
	I1227 09:57:41.754162  769388 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:57:41.754242  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:57:41.754318  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:57:41.754400  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:57:41.754479  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:57:41.754553  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:57:41.754656  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:57:41.754726  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:57:41.754805  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:57:41.836243  769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:57:41.836443  769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:57:41.836586  769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:57:41.855494  769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:57:41.785794  769090 out.go:252]   - Generating certificates and keys ...
	I1227 09:57:41.785959  769090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:57:41.786069  769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:57:42.111543  769090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:57:42.252770  769090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:57:42.503417  769090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:57:42.668993  769090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:57:43.021398  769090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:57:43.021831  769090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:57:41.860963  769388 out.go:252]   - Generating certificates and keys ...
	I1227 09:57:41.861090  769388 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:57:41.861187  769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:57:42.027134  769388 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:57:42.183308  769388 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:57:42.275495  769388 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:57:42.538151  769388 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:57:42.689457  769388 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:57:42.690078  769388 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:57:42.729913  769388 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:57:42.730516  769388 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:57:42.981667  769388 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:57:43.099131  769388 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:57:43.810479  769388 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:57:43.811011  769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:57:44.109743  769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:57:44.315485  769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:57:44.540089  769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:57:44.694926  769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:57:45.077270  769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:57:45.080386  769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:57:45.089864  769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:57:43.563328  769090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:57:43.564051  769090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:57:43.973250  769090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:57:44.693761  769090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:57:44.975792  769090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:57:44.976216  769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:57:45.527516  769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:57:45.744663  769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:57:45.991918  769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:57:46.189187  769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:57:46.428467  769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:57:46.429216  769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:57:46.432110  769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:57:46.435922  769090 out.go:252]   - Booting up control plane ...
	I1227 09:57:46.436040  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:57:46.436157  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:57:46.436262  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:57:46.453052  769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:57:46.453445  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:57:46.460773  769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:57:46.461104  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:57:46.461150  769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:57:46.595002  769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:57:46.595169  769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
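The [certs] phase above writes into the certificateDir reported later in this run ("/var/lib/minikube/certs"), using kubeadm's standard layout. A minimal inspection sketch, assuming the docker driver names the node container after the profile and the container is still up:

    # shell into the node container (name assumed to match the profile)
    docker exec -it force-systemd-env-159617 /bin/bash
    # list the generated certificates and keys
    ls -la /var/lib/minikube/certs /var/lib/minikube/certs/etcd
    # confirm the SANs logged above for the etcd server cert
    openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text | grep -A1 'Subject Alternative Name'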
	I1227 09:57:45.093574  769388 out.go:252]   - Booting up control plane ...
	I1227 09:57:45.095563  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:57:45.097773  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:57:45.099785  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:57:45.145757  769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:57:45.145889  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:57:45.157698  769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:57:45.158555  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:57:45.158619  769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:57:45.405440  769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:57:45.405562  769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:01:45.399682  769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001476405s
	I1227 10:01:45.399725  769388 kubeadm.go:319] 
	I1227 10:01:45.399789  769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:01:45.399827  769388 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:01:45.399942  769388 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:01:45.399950  769388 kubeadm.go:319] 
	I1227 10:01:45.400064  769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:01:45.400098  769388 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:01:45.400133  769388 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:01:45.400138  769388 kubeadm.go:319] 
	I1227 10:01:45.404789  769388 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:01:45.405218  769388 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:01:45.405332  769388 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:01:45.405567  769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:01:45.405577  769388 kubeadm.go:319] 
	I1227 10:01:45.405646  769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:01:45.405800  769388 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001476405s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
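The failure text names the exact commands to run, and they only make sense inside the node, since that is where the kubelet runs. A sketch, assuming the docker driver's node container is named after the profile and curl exists in the node image:

    # kubelet state and recent journal, as suggested by kubeadm
    docker exec force-systemd-flag-574701 systemctl status kubelet --no-pager
    docker exec force-systemd-flag-574701 journalctl -xeu kubelet --no-pager | tail -n 50
    # the health endpoint kubeadm polled for 4m0s
    docker exec force-systemd-flag-574701 curl -sSL http://127.0.0.1:10248/healthz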
	
	I1227 10:01:45.405885  769388 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 10:01:45.831088  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:45.845534  769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:01:45.845599  769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:01:45.853400  769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:01:45.853418  769388 kubeadm.go:158] found existing configuration files:
	
	I1227 10:01:45.853490  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:01:45.862159  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:01:45.862225  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:01:45.869960  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:01:45.877918  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:01:45.877988  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:01:45.885657  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:01:45.893024  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:01:45.893088  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:01:45.900643  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:01:45.908132  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:01:45.908198  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
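The four grep/rm pairs above are one check applied per kubeconfig: keep the file only if it already points at the expected endpoint. A condensed bash sketch of the same sweep (endpoint and file list taken from the log; minikube's actual implementation is Go, not shell):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero if the endpoint is absent or the file is missing,
      # so both cases fall through to the delete, matching the log above
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done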
	I1227 10:01:45.915813  769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:01:45.955846  769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:01:45.955910  769388 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:01:46.044287  769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:01:46.044366  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:01:46.044408  769388 kubeadm.go:319] OS: Linux
	I1227 10:01:46.044460  769388 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:01:46.044514  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:01:46.044563  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:01:46.044621  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:01:46.044672  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:01:46.044726  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:01:46.044780  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:01:46.044831  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:01:46.044883  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:01:46.122322  769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:01:46.122522  769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:01:46.122662  769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:01:46.135379  769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:01:46.139129  769388 out.go:252]   - Generating certificates and keys ...
	I1227 10:01:46.139327  769388 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:01:46.139450  769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:01:46.139598  769388 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:01:46.139674  769388 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:01:46.139756  769388 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:01:46.139815  769388 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:01:46.139883  769388 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:01:46.139949  769388 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:01:46.140059  769388 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:01:46.140138  769388 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:01:46.140469  769388 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:01:46.140529  769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:01:46.278774  769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:01:46.467106  769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:01:46.674089  769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:01:46.962090  769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:01:47.089511  769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:01:47.090121  769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:01:47.094363  769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:01:46.594891  769090 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000241154s
	I1227 10:01:46.594938  769090 kubeadm.go:319] 
	I1227 10:01:46.595000  769090 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:01:46.595036  769090 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:01:46.595163  769090 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:01:46.595173  769090 kubeadm.go:319] 
	I1227 10:01:46.595286  769090 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:01:46.595323  769090 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:01:46.595357  769090 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:01:46.595361  769090 kubeadm.go:319] 
	I1227 10:01:46.600352  769090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:01:46.600807  769090 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:01:46.600916  769090 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:01:46.601157  769090 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:01:46.601163  769090 kubeadm.go:319] 
	I1227 10:01:46.601232  769090 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:01:46.601345  769090 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000241154s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
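Both failing tests force the systemd cgroup manager, so one plausible check (an assumption, not something this log performs) is whether Docker inside the node actually switched drivers and whether the kubelet config agrees:

    # expected: "systemd" under --force-systemd
    docker exec force-systemd-env-159617 docker info --format '{{.CgroupDriver}}'
    # the kubelet side, if the config file survived the reset above
    docker exec force-systemd-env-159617 grep -i cgroup /var/lib/kubelet/config.yaml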
	
	I1227 10:01:46.601418  769090 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 10:01:47.049789  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:47.065686  769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:01:47.065751  769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:01:47.078067  769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:01:47.078144  769090 kubeadm.go:158] found existing configuration files:
	
	I1227 10:01:47.078247  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:01:47.088920  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:01:47.089035  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:01:47.101290  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:01:47.111719  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:01:47.111783  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:01:47.119486  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:01:47.128720  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:01:47.128889  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:01:47.137979  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:01:47.146623  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:01:47.146781  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:01:47.155774  769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:01:47.197997  769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:01:47.198575  769090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:01:47.334679  769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:01:47.334774  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:01:47.334814  769090 kubeadm.go:319] OS: Linux
	I1227 10:01:47.334877  769090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:01:47.334937  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:01:47.335000  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:01:47.335065  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:01:47.335164  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:01:47.335236  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:01:47.335294  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:01:47.335359  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:01:47.335418  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:01:47.413630  769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:01:47.413746  769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:01:47.413842  769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:01:47.427809  769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:01:47.431698  769090 out.go:252]   - Generating certificates and keys ...
	I1227 10:01:47.431881  769090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:01:47.431951  769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:01:47.432047  769090 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:01:47.432114  769090 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:01:47.432211  769090 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:01:47.432286  769090 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:01:47.432360  769090 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:01:47.432432  769090 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:01:47.432512  769090 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:01:47.432810  769090 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:01:47.433140  769090 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:01:47.433248  769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:01:47.584725  769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:01:47.986204  769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:01:47.097843  769388 out.go:252]   - Booting up control plane ...
	I1227 10:01:47.097949  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:01:47.099592  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:01:47.099673  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:01:47.133940  769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:01:47.134045  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:01:47.147908  769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:01:47.148976  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:01:47.149327  769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:01:47.321604  769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:01:47.321718  769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:01:48.231719  769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:01:48.868258  769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:01:49.097361  769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:01:49.097857  769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:01:49.100455  769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:01:49.104347  769090 out.go:252]   - Booting up control plane ...
	I1227 10:01:49.104456  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:01:49.104539  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:01:49.105527  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:01:49.125548  769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:01:49.125672  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:01:49.134446  769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:01:49.134626  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:01:49.134694  769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:01:49.262884  769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:01:49.263010  769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
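At this point kubeadm is waiting for the kubelet to turn the files in /etc/kubernetes/manifests into static pods. A quick hypothetical check that the manifests exist and that anything at all got started:

    # the static pod manifests kubeadm wrote
    docker exec force-systemd-env-159617 ls -la /etc/kubernetes/manifests
    # with the kubelet down, no control-plane containers should appear
    docker exec force-systemd-env-159617 docker ps --filter name=kube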
	I1227 10:05:47.321648  769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305874s
	I1227 10:05:47.321690  769388 kubeadm.go:319] 
	I1227 10:05:47.321762  769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:05:47.321802  769388 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:05:47.321944  769388 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:05:47.321958  769388 kubeadm.go:319] 
	I1227 10:05:47.322066  769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:05:47.322103  769388 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:05:47.322153  769388 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:05:47.322165  769388 kubeadm.go:319] 
	I1227 10:05:47.325886  769388 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:05:47.326310  769388 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:05:47.326424  769388 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:05:47.326663  769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:05:47.326673  769388 kubeadm.go:319] 
	I1227 10:05:47.326742  769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:05:47.326828  769388 kubeadm.go:403] duration metric: took 8m5.922999378s to StartCluster
	I1227 10:05:47.326868  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:05:47.326939  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:05:47.362142  769388 cri.go:96] found id: ""
	I1227 10:05:47.362184  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.362193  769388 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:05:47.362200  769388 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:05:47.362260  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:05:47.386992  769388 cri.go:96] found id: ""
	I1227 10:05:47.387017  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.387026  769388 logs.go:284] No container was found matching "etcd"
	I1227 10:05:47.387033  769388 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:05:47.387095  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:05:47.412506  769388 cri.go:96] found id: ""
	I1227 10:05:47.412532  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.412541  769388 logs.go:284] No container was found matching "coredns"
	I1227 10:05:47.412549  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:05:47.412607  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:05:47.440415  769388 cri.go:96] found id: ""
	I1227 10:05:47.440440  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.440449  769388 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:05:47.440456  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:05:47.440515  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:05:47.465494  769388 cri.go:96] found id: ""
	I1227 10:05:47.465522  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.465530  769388 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:05:47.465538  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:05:47.465601  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:05:47.494595  769388 cri.go:96] found id: ""
	I1227 10:05:47.494628  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.494638  769388 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:05:47.494645  769388 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:05:47.494716  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:05:47.523703  769388 cri.go:96] found id: ""
	I1227 10:05:47.523728  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.523736  769388 logs.go:284] No container was found matching "kindnet"
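The seven queries above differ only in the --name filter. A bash sketch of the same sweep (component list from the log), whose empty output lines up with the "No container was found" warnings:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$c")
      echo "$c: ${ids:-<none>}"
    done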
	I1227 10:05:47.523746  769388 logs.go:123] Gathering logs for Docker ...
	I1227 10:05:47.523757  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 10:05:47.546298  769388 logs.go:123] Gathering logs for container status ...
	I1227 10:05:47.546329  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:05:47.584884  769388 logs.go:123] Gathering logs for kubelet ...
	I1227 10:05:47.584959  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:05:47.653574  769388 logs.go:123] Gathering logs for dmesg ...
	I1227 10:05:47.653612  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
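The same gather steps can be reproduced by hand; minikube's ssh helper is one assumed route (any shell on the node works):

    minikube ssh -p force-systemd-flag-574701 -- sudo journalctl -u kubelet -n 400 --no-pager
    minikube ssh -p force-systemd-flag-574701 -- sudo journalctl -u docker -u cri-docker -n 400 --no-pager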
	I1227 10:05:47.671978  769388 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:05:47.672006  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:05:47.737784  769388 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:47.729462    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.730146    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.731816    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.732344    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.733957    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:05:47.729462    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.730146    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.731816    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.732344    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.733957    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
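The describe-nodes failure is a downstream symptom: with no kube-apiserver container (see the crictl sweep above), nothing answers on 8443. A hypothetical confirmation from inside the node:

    # no listener is expected on the apiserver port while the kubelet is down
    sudo ss -tlnp | grep 8443 || echo 'no listener on 8443'
    # the same call the log ran, with the on-node binary and kubeconfig
    sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig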
	W1227 10:05:47.737860  769388 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
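The recurring cgroups v1 warning names the kubelet option itself. A hypothetical KubeletConfiguration fragment following that advice (the lowerCamelCase YAML key is assumed from the Go-style name in the warning; per the message, the SystemVerification check would also need to be skipped, which minikube already passes via --ignore-preflight-errors here):

    # sketch only: opt kubelet v1.35+ back into cgroup v1, per the warning text
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF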
	W1227 10:05:47.737902  769388 out.go:285] * 
	W1227 10:05:47.737955  769388 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:05:47.737974  769388 out.go:285] * 
	W1227 10:05:47.738225  769388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:05:47.743845  769388 out.go:203] 
	W1227 10:05:47.746703  769388 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:05:47.746744  769388 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:05:47.746767  769388 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:05:47.749808  769388 out.go:203] 
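The suggestion and warnings above all point at the same root cause: the node is running on a cgroup v1 host (see the "cgroups v1 support is deprecated" warning in the kubeadm stderr), and kubelet v1.35 refuses to start there by default. As a quick diagnostic sketch, not taken from this run, the host's cgroup version and Docker's view of it can be checked with standard commands:

	# 'cgroup2fs' means cgroup v2; 'tmpfs' means the legacy v1 hierarchy
	stat -fc %T /sys/fs/cgroup
	# Docker reports both its cgroup driver and the cgroup version it detected
	docker info --format '{{.CgroupDriver}} / cgroup v{{.CgroupVersion}}'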
	
	
	==> Docker <==
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.267209749Z" level=info msg="Restoring containers: start."
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.287569154Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.307561743Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.512818399Z" level=info msg="Loading containers: done."
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.531516903Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.531579162Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.531613729Z" level=info msg="Initializing buildkit"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.552331803Z" level=info msg="Completed buildkit initialization"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.558443093Z" level=info msg="Daemon has completed initialization"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.558651391Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 09:57:38 force-systemd-flag-574701 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.559486059Z" level=info msg="API listen on /run/docker.sock"
	Dec 27 09:57:38 force-systemd-flag-574701 dockerd[1139]: time="2025-12-27T09:57:38.559558500Z" level=info msg="API listen on [::]:2376"
	Dec 27 09:57:39 force-systemd-flag-574701 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Setting cgroupDriver systemd"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 27 09:57:39 force-systemd-flag-574701 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 27 09:57:39 force-systemd-flag-574701 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:49.121495    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.122187    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.123651    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.124083    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.125486    5631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.131052] systemd-journald[229]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 08:52] overlayfs: idmapped layers are currently not supported
	[Dec27 08:53] overlayfs: idmapped layers are currently not supported
	[Dec27 08:55] overlayfs: idmapped layers are currently not supported
	[Dec27 08:56] overlayfs: idmapped layers are currently not supported
	[Dec27 09:02] overlayfs: idmapped layers are currently not supported
	[Dec27 09:03] overlayfs: idmapped layers are currently not supported
	[Dec27 09:04] overlayfs: idmapped layers are currently not supported
	[Dec27 09:05] overlayfs: idmapped layers are currently not supported
	[Dec27 09:06] overlayfs: idmapped layers are currently not supported
	[Dec27 09:08] overlayfs: idmapped layers are currently not supported
	[ +24.018537] overlayfs: idmapped layers are currently not supported
	[Dec27 09:09] overlayfs: idmapped layers are currently not supported
	[ +25.285275] overlayfs: idmapped layers are currently not supported
	[Dec27 09:10] overlayfs: idmapped layers are currently not supported
	[ +21.268238] systemd-journald[230]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 09:11] overlayfs: idmapped layers are currently not supported
	[  +4.417156] overlayfs: idmapped layers are currently not supported
	[ +35.863671] overlayfs: idmapped layers are currently not supported
	[Dec27 09:12] overlayfs: idmapped layers are currently not supported
	[Dec27 09:13] overlayfs: idmapped layers are currently not supported
	[Dec27 09:14] overlayfs: idmapped layers are currently not supported
	[ +22.811829] overlayfs: idmapped layers are currently not supported
	[Dec27 09:16] overlayfs: idmapped layers are currently not supported
	[Dec27 09:18] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:05:49 up  4:48,  0 user,  load average: 1.09, 0.98, 1.70
	Linux force-systemd-flag-574701 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:46 force-systemd-flag-574701 kubelet[5406]: E1227 10:05:46.846947    5406 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:46 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:47 force-systemd-flag-574701 kubelet[5479]: E1227 10:05:47.610804    5479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:47 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:48 force-systemd-flag-574701 kubelet[5534]: E1227 10:05:48.387256    5534 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:48 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:49 force-systemd-flag-574701 kubelet[5624]: E1227 10:05:49.107929    5624 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:49 force-systemd-flag-574701 systemd[1]: kubelet.service: Failed with result 'exit-code'.
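This restart loop is the proximate failure: each kubelet start exits during configuration validation, so no static pods are ever created, kubeadm's 4m0s wait on http://127.0.0.1:10248/healthz times out, and the API server on localhost:8443 never comes up. The kubeadm warning earlier names the opt-out for kubelet v1.35+ on cgroup v1 hosts; a minimal sketch of that configuration fragment follows (the file name is illustrative, and wiring it through minikube's kubeadm flow is not shown by this log):

	# illustrative patch file; 'failCgroupV1' is the option named by the kubeadm warning
	cat > kubelet-cgroupv1-patch.yaml <<'EOF'
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

Per the same warning, the corresponding SystemVerification check would also have to be skipped explicitly.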
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-574701 -n force-systemd-flag-574701
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-574701 -n force-systemd-flag-574701: exit status 6 (451.825774ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1227 10:05:49.827623  781915 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-574701" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-574701" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-574701" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-574701
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-574701: (2.09332833s)
--- FAIL: TestForceSystemdFlag (508.16s)
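For reference, the Suggestion emitted above translates to a retry along these lines against the same profile (not executed in this run; whether it clears the v1.35 cgroup v1 validation is not shown by this log):

	out/minikube-linux-arm64 start -p force-systemd-flag-574701 --force-systemd \
		--extra-config=kubelet.cgroup-driver=systemd --driver=docker --container-runtime=docker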

                                                
                                    
TestForceSystemdEnv (511.21s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-159617 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-159617 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m26.861781925s)

                                                
                                                
-- stdout --
	* [force-systemd-env-159617] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-159617" primary control-plane node in "force-systemd-env-159617" cluster
	* Pulling base image v0.0.48-1766570851-22316 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1227 09:57:23.095610  769090 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:57:23.095714  769090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.095719  769090 out.go:374] Setting ErrFile to fd 2...
	I1227 09:57:23.095724  769090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.095992  769090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:57:23.096382  769090 out.go:368] Setting JSON to false
	I1227 09:57:23.097113  769090 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16794,"bootTime":1766812649,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:57:23.097165  769090 start.go:143] virtualization:  
	I1227 09:57:23.100886  769090 out.go:179] * [force-systemd-env-159617] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:57:23.104813  769090 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:57:23.105040  769090 notify.go:221] Checking for updates...
	I1227 09:57:23.110969  769090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:57:23.116911  769090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:57:23.119852  769090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:57:23.122762  769090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:57:23.126220  769090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1227 09:57:23.129701  769090 config.go:182] Loaded profile config "offline-docker-663445": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:23.129800  769090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:57:23.172624  769090 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:57:23.172739  769090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:23.258813  769090 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-27 09:57:23.24655821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:23.258917  769090 docker.go:319] overlay module found
	I1227 09:57:23.263798  769090 out.go:179] * Using the docker driver based on user configuration
	I1227 09:57:23.266789  769090 start.go:309] selected driver: docker
	I1227 09:57:23.266809  769090 start.go:928] validating driver "docker" against <nil>
	I1227 09:57:23.266823  769090 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:57:23.267966  769090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:23.370956  769090 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:44 SystemTime:2025-12-27 09:57:23.360749972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:23.371129  769090 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:57:23.371351  769090 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:57:23.377001  769090 out.go:179] * Using Docker driver with root privileges
	I1227 09:57:23.380487  769090 cni.go:84] Creating CNI manager for ""
	I1227 09:57:23.380565  769090 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:23.380582  769090 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 09:57:23.380665  769090 start.go:353] cluster config:
	{Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:23.384230  769090 out.go:179] * Starting "force-systemd-env-159617" primary control-plane node in "force-systemd-env-159617" cluster
	I1227 09:57:23.387377  769090 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 09:57:23.391413  769090 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:57:23.394502  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:23.394565  769090 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1227 09:57:23.394580  769090 cache.go:65] Caching tarball of preloaded images
	I1227 09:57:23.394674  769090 preload.go:251] Found /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:57:23.394696  769090 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 09:57:23.394813  769090 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/config.json ...
	I1227 09:57:23.394837  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/config.json: {Name:mk8651bda82b5dd38d893e76015a4d69009111d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:23.395003  769090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:57:23.415963  769090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:57:23.415983  769090 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:57:23.415998  769090 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:57:23.416029  769090 start.go:360] acquireMachinesLock for force-systemd-env-159617: {Name:mk1b798ab2cc55ad7be26d9552dd2e551cf406b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:57:23.416126  769090 start.go:364] duration metric: took 82.311µs to acquireMachinesLock for "force-systemd-env-159617"
	I1227 09:57:23.416150  769090 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 09:57:23.416219  769090 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:57:23.421132  769090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:57:23.421440  769090 start.go:159] libmachine.API.Create for "force-systemd-env-159617" (driver="docker")
	I1227 09:57:23.421474  769090 client.go:173] LocalClient.Create starting
	I1227 09:57:23.421564  769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
	I1227 09:57:23.421635  769090 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:23.421681  769090 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:23.421760  769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
	I1227 09:57:23.421803  769090 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:23.421839  769090 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:23.422293  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:57:23.444615  769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:57:23.444701  769090 network_create.go:284] running [docker network inspect force-systemd-env-159617] to gather additional debugging logs...
	I1227 09:57:23.444722  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617
	W1227 09:57:23.469730  769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 returned with exit code 1
	I1227 09:57:23.469759  769090 network_create.go:287] error running [docker network inspect force-systemd-env-159617]: docker network inspect force-systemd-env-159617: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-159617 not found
	I1227 09:57:23.469771  769090 network_create.go:289] output of [docker network inspect force-systemd-env-159617]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-159617 not found
	
	** /stderr **
	I1227 09:57:23.469879  769090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:23.484995  769090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
	I1227 09:57:23.485264  769090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
	I1227 09:57:23.485535  769090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
	I1227 09:57:23.485842  769090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-74a76dba2194 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:01:b7:05:f7:b5} reservation:<nil>}
	I1227 09:57:23.486201  769090 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47360}
	I1227 09:57:23.486220  769090 network_create.go:124] attempt to create docker network force-systemd-env-159617 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 09:57:23.486272  769090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-159617 force-systemd-env-159617
	I1227 09:57:23.588843  769090 network_create.go:108] docker network force-systemd-env-159617 192.168.85.0/24 created
	I1227 09:57:23.588880  769090 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-159617" container
	I1227 09:57:23.588951  769090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:57:23.607164  769090 cli_runner.go:164] Run: docker volume create force-systemd-env-159617 --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:57:23.627044  769090 oci.go:103] Successfully created a docker volume force-systemd-env-159617
	I1227 09:57:23.627271  769090 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-159617-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --entrypoint /usr/bin/test -v force-systemd-env-159617:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:57:24.208049  769090 oci.go:107] Successfully prepared a docker volume force-systemd-env-159617
	I1227 09:57:24.208115  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:24.208125  769090 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:57:24.208197  769090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:57:29.290594  769090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (5.082338598s)
	I1227 09:57:29.290643  769090 kic.go:203] duration metric: took 5.082509768s to extract preloaded images to volume ...
	W1227 09:57:29.290794  769090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:57:29.290951  769090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:57:29.395948  769090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-159617 --name force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-159617 --network force-systemd-env-159617 --ip 192.168.85.2 --volume force-systemd-env-159617:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:57:29.916266  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Running}}
	I1227 09:57:29.946688  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:29.995989  769090 cli_runner.go:164] Run: docker exec force-systemd-env-159617 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:57:30.096142  769090 oci.go:144] the created container "force-systemd-env-159617" has a running status.
	I1227 09:57:30.096178  769090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa...
	I1227 09:57:30.500317  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:57:30.500877  769090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:57:30.556340  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:30.597973  769090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:57:30.597993  769090 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-159617 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:57:30.707985  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:30.755347  769090 machine.go:94] provisionDockerMachine start ...
	I1227 09:57:30.755426  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:30.787678  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:30.788014  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:30.788023  769090 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:57:30.789480  769090 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40286->127.0.0.1:33728: read: connection reset by peer
	I1227 09:57:33.935423  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
	
	I1227 09:57:33.935449  769090 ubuntu.go:182] provisioning hostname "force-systemd-env-159617"
	I1227 09:57:33.935561  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:33.958892  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:33.959223  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:33.959235  769090 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-159617 && echo "force-systemd-env-159617" | sudo tee /etc/hostname
	I1227 09:57:34.119941  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
	
	I1227 09:57:34.120013  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.142778  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.143089  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.143106  769090 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-159617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-159617/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-159617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:57:34.287061  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:57:34.287083  769090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
	I1227 09:57:34.287101  769090 ubuntu.go:190] setting up certificates
	I1227 09:57:34.287154  769090 provision.go:84] configureAuth start
	I1227 09:57:34.287222  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:34.331489  769090 provision.go:143] copyHostCerts
	I1227 09:57:34.331534  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.331572  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
	I1227 09:57:34.331590  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.331648  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
	I1227 09:57:34.331728  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.331749  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
	I1227 09:57:34.331757  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.331779  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
	I1227 09:57:34.331821  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.331841  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
	I1227 09:57:34.331846  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.331869  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
	I1227 09:57:34.331917  769090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-159617 san=[127.0.0.1 192.168.85.2 force-systemd-env-159617 localhost minikube]
	I1227 09:57:34.598391  769090 provision.go:177] copyRemoteCerts
	I1227 09:57:34.598509  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:57:34.598589  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.616730  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:34.716531  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:57:34.716639  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:57:34.746980  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:57:34.747057  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:57:34.766043  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:57:34.766100  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:57:34.785469  769090 provision.go:87] duration metric: took 498.291074ms to configureAuth
	I1227 09:57:34.785494  769090 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:57:34.785662  769090 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:34.785721  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.802871  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.803337  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.803351  769090 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 09:57:34.967701  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 09:57:34.967720  769090 ubuntu.go:71] root file system type: overlay
	I1227 09:57:34.967841  769090 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 09:57:34.967907  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.988654  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.988961  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.989046  769090 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 09:57:35.153832  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
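The empty ExecStart= line in the unit above is what makes the override legal: for non-oneshot services systemd rejects multiple ExecStart= settings, so the inherited command must be cleared before a replacement is given (the comment block in the unit says the same). A minimal sketch of the pattern, with a hypothetical service:

    # /etc/systemd/system/example.service.d/override.conf  (hypothetical unit)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/exampled --new-flags

followed by systemctl daemon-reload to pick the change up.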
	
	I1227 09:57:35.153922  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:35.181379  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:35.181695  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:35.181712  769090 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 09:57:36.406595  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 09:57:35.148525118 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 09:57:36.406630  769090 machine.go:97] duration metric: took 5.651265169s to provisionDockerMachine
	I1227 09:57:36.406643  769090 client.go:176] duration metric: took 12.985158917s to LocalClient.Create
	I1227 09:57:36.406661  769090 start.go:167] duration metric: took 12.98522367s to libmachine.API.Create "force-systemd-env-159617"
	I1227 09:57:36.406668  769090 start.go:293] postStartSetup for "force-systemd-env-159617" (driver="docker")
	I1227 09:57:36.406681  769090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:57:36.406740  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:57:36.406784  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.424421  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.529164  769090 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:57:36.534359  769090 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:57:36.534393  769090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:57:36.534406  769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
	I1227 09:57:36.534457  769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
	I1227 09:57:36.534546  769090 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
	I1227 09:57:36.534559  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
	I1227 09:57:36.534656  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:57:36.545176  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:36.564519  769090 start.go:296] duration metric: took 157.818194ms for postStartSetup
	I1227 09:57:36.564872  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:36.582964  769090 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/config.json ...
	I1227 09:57:36.583262  769090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:57:36.583316  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.603598  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.705489  769090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:57:36.712003  769090 start.go:128] duration metric: took 13.295769122s to createHost
	I1227 09:57:36.712030  769090 start.go:83] releasing machines lock for "force-systemd-env-159617", held for 13.295895493s
	I1227 09:57:36.712104  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:36.735458  769090 ssh_runner.go:195] Run: cat /version.json
	I1227 09:57:36.735509  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.735527  769090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:57:36.735606  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.763793  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.767335  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.874762  769090 ssh_runner.go:195] Run: systemctl --version
	I1227 09:57:36.974322  769090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:57:36.981372  769090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:57:36.981442  769090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:57:37.027684  769090 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:57:37.027787  769090 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.027825  769090 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.028014  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.048308  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:57:37.060423  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:57:37.072092  769090 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:57:37.072150  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:57:37.082000  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.091287  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:57:37.099834  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.120427  769090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:57:37.128839  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:57:37.139785  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:57:37.156006  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
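The sed edits above flip containerd to the systemd cgroup driver, force the runc v2 shim, pin the sandbox image, and re-enable unprivileged ports. A rough sketch of the affected fragment of /etc/containerd/config.toml afterwards (reassembled from the sed expressions, not a dump of the real file):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true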
	I1227 09:57:37.167227  769090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:57:37.176858  769090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:57:37.188913  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:37.345099  769090 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:57:37.452805  769090 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.452846  769090 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.452907  769090 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 09:57:37.474525  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.495905  769090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:57:37.546927  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.567236  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:57:37.591088  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.608681  769090 ssh_runner.go:195] Run: which cri-dockerd
	I1227 09:57:37.613473  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 09:57:37.622987  769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 09:57:37.639261  769090 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 09:57:37.803450  769090 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 09:57:37.985157  769090 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 09:57:37.985302  769090 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
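The 129-byte daemon.json pushed here is what actually forces Docker onto the systemd cgroup driver. The log does not print the payload; the conventional minimal form is shown below as an assumption (the real file likely also carries logging and storage options):

    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }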
	I1227 09:57:38.001357  769090 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 09:57:38.018865  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:38.182967  769090 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 09:57:38.643595  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:57:38.659567  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 09:57:38.676415  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:38.693157  769090 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 09:57:38.864384  769090 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 09:57:39.021630  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.162919  769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 09:57:39.195686  769090 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 09:57:39.211669  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.365125  769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 09:57:39.465622  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:39.482004  769090 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 09:57:39.482130  769090 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 09:57:39.486220  769090 start.go:574] Will wait 60s for crictl version
	I1227 09:57:39.486340  769090 ssh_runner.go:195] Run: which crictl
	I1227 09:57:39.491356  769090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:57:39.522612  769090 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 09:57:39.522673  769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.553580  769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.589853  769090 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 09:57:39.589955  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:39.609607  769090 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:57:39.613910  769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
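That bash fragment is a safe upsert into /etc/hosts: filter out any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back with sudo (a plain > redirect would not run as root on the target file). Generalized, with a hypothetical name and IP:

    { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.1	example.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts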
	I1227 09:57:39.623309  769090 kubeadm.go:884] updating cluster {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:57:39.623458  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:39.623516  769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.644906  769090 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.644931  769090 docker.go:624] Images already preloaded, skipping extraction
	I1227 09:57:39.644988  769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.664959  769090 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.664988  769090 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:57:39.664998  769090 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1227 09:57:39.665088  769090 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-159617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
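As with the Docker unit earlier, the kubelet drop-in clears ExecStart= before supplying minikube's command line, and the flags pin the node name and IP so kubelet registration matches the profile; the cgroup driver itself comes from the KubeletConfiguration rendered below. The effective unit plus drop-ins can be checked on the node with:

    systemctl cat kubelet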
	I1227 09:57:39.665158  769090 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 09:57:39.747517  769090 cni.go:84] Creating CNI manager for ""
	I1227 09:57:39.747540  769090 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:39.747563  769090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:57:39.747608  769090 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-159617 NodeName:force-systemd-env-159617 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:57:39.747762  769090 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-159617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
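The rendered kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A file like this can be exercised without side effects before kubeadm acts on it, a sketch assuming kubeadm is on PATH on the node:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run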
	
	I1227 09:57:39.747834  769090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:57:39.760575  769090 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:57:39.760648  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:57:39.775516  769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1227 09:57:39.797752  769090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:57:39.810219  769090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1227 09:57:39.828590  769090 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:57:39.832469  769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.842381  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:40.061511  769090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:57:40.082736  769090 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617 for IP: 192.168.85.2
	I1227 09:57:40.082833  769090 certs.go:195] generating shared ca certs ...
	I1227 09:57:40.082870  769090 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.083102  769090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
	I1227 09:57:40.083211  769090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
	I1227 09:57:40.083245  769090 certs.go:257] generating profile certs ...
	I1227 09:57:40.083338  769090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key
	I1227 09:57:40.083381  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt with IP's: []
	I1227 09:57:40.290500  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt ...
	I1227 09:57:40.290601  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt: {Name:mkdef657d92ac442b8ca8d24bafb061317e911bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.290877  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key ...
	I1227 09:57:40.290927  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key: {Name:mkd98e7a2fa2573ec393c9c33ed2af8ef854cd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.291097  769090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17
	I1227 09:57:40.291156  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:57:40.441193  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 ...
	I1227 09:57:40.441292  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17: {Name:mka639a3de484b92be9c260344df9e8bdedff2cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.441538  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 ...
	I1227 09:57:40.441579  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17: {Name:mkdfe6ab9be254d46412de6c107cb553d654d1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.441720  769090 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt
	I1227 09:57:40.441858  769090 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key
	I1227 09:57:40.441988  769090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key
	I1227 09:57:40.442045  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt with IP's: []
	I1227 09:57:40.780289  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt ...
	I1227 09:57:40.780323  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt: {Name:mk8f859572961556f4c1a1a4febed8df29d82f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780533  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key ...
	I1227 09:57:40.780542  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key: {Name:mk7056050a32483ae445b0ae07006f0562cf0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780640  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:57:40.780659  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:57:40.780678  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:57:40.780691  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:57:40.780705  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:57:40.780722  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:57:40.780742  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:57:40.780754  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:57:40.780817  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
	W1227 09:57:40.780867  769090 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
	I1227 09:57:40.780876  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:57:40.780908  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:57:40.780938  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:57:40.780966  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
	I1227 09:57:40.781023  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:40.781067  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
	I1227 09:57:40.781079  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:40.781090  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
	I1227 09:57:40.781688  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:57:40.814042  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:57:40.838435  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:57:40.880890  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:57:40.906281  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:57:40.928048  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:57:40.950863  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:57:40.973554  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:57:40.993400  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
	I1227 09:57:41.017107  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:57:41.037355  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
	I1227 09:57:41.066525  769090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:57:41.095696  769090 ssh_runner.go:195] Run: openssl version
	I1227 09:57:41.107307  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.118732  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
	I1227 09:57:41.132658  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.138503  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.138605  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.185800  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:57:41.193790  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
	I1227 09:57:41.201492  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.208841  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
	I1227 09:57:41.216427  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.220469  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.220555  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.265817  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.273569  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.281083  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.288616  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:57:41.296277  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.300012  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.300113  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.343100  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:57:41.351309  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
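The *.0 names are OpenSSL's hashed-directory convention: TLS libraries locate a CA by the hash of its subject, so each PEM gets a symlink named <subject-hash>.0. The hash in the link name is exactly what the openssl invocation above printed, e.g. for the minikube CA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0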
	I1227 09:57:41.358883  769090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:57:41.362914  769090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:57:41.362973  769090 kubeadm.go:401] StartCluster: {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:41.363101  769090 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 09:57:41.381051  769090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:57:41.392106  769090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:57:41.400552  769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:57:41.400659  769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:57:41.412462  769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:57:41.412533  769090 kubeadm.go:158] found existing configuration files:
	
	I1227 09:57:41.412612  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:57:41.421832  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:57:41.421945  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:57:41.432909  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:57:41.443013  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:57:41.443076  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:57:41.451990  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.462018  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:57:41.462083  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.470161  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:57:41.479985  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:57:41.480066  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:57:41.488640  769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:57:41.541967  769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:57:41.544237  769090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:57:41.651990  769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:57:41.652128  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:57:41.652184  769090 kubeadm.go:319] OS: Linux
	I1227 09:57:41.652254  769090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:57:41.652330  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:57:41.652403  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:57:41.652481  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:57:41.652557  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:57:41.652636  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:57:41.652713  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:57:41.652790  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:57:41.652862  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:57:41.748451  769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:57:41.748635  769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:57:41.748758  769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:57:41.778942  769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:57:41.785794  769090 out.go:252]   - Generating certificates and keys ...
	I1227 09:57:41.785959  769090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:57:41.786069  769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:57:42.111543  769090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:57:42.252770  769090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:57:42.503417  769090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:57:42.668993  769090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:57:43.021398  769090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:57:43.021831  769090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:57:43.563328  769090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:57:43.564051  769090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:57:43.973250  769090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:57:44.693761  769090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:57:44.975792  769090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:57:44.976216  769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:57:45.527516  769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:57:45.744663  769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:57:45.991918  769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:57:46.189187  769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:57:46.428467  769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:57:46.429216  769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:57:46.432110  769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:57:46.435922  769090 out.go:252]   - Booting up control plane ...
	I1227 09:57:46.436040  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:57:46.436157  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:57:46.436262  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:57:46.453052  769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:57:46.453445  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:57:46.460773  769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:57:46.461104  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:57:46.461150  769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:57:46.595002  769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:57:46.595169  769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:01:46.594891  769090 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000241154s
	I1227 10:01:46.594938  769090 kubeadm.go:319] 
	I1227 10:01:46.595000  769090 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:01:46.595036  769090 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:01:46.595163  769090 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:01:46.595173  769090 kubeadm.go:319] 
	I1227 10:01:46.595286  769090 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:01:46.595323  769090 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:01:46.595357  769090 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:01:46.595361  769090 kubeadm.go:319] 
	I1227 10:01:46.600352  769090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:01:46.600807  769090 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:01:46.600916  769090 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:01:46.601157  769090 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:01:46.601163  769090 kubeadm.go:319] 
	I1227 10:01:46.601232  769090 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
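[Editor's note, not part of the captured log: the [kubelet-check] phase above is a plain HTTP probe of the kubelet's local healthz endpoint, as the error text itself spells out. A minimal sketch of re-running that probe by hand from inside the node container, using the profile name from this run:]

    $ out/minikube-linux-arm64 ssh -p force-systemd-env-159617 "curl -sSL http://127.0.0.1:10248/healthz"
    # If the probe hangs or is refused, inspect the unit directly, as kubeadm suggests:
    $ out/minikube-linux-arm64 ssh -p force-systemd-env-159617 "sudo systemctl status kubelet"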
	W1227 10:01:46.601345  769090 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000241154s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1227 10:01:46.601418  769090 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 10:01:47.049789  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:47.065686  769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:01:47.065751  769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:01:47.078067  769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:01:47.078144  769090 kubeadm.go:158] found existing configuration files:
	
	I1227 10:01:47.078247  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:01:47.088920  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:01:47.089035  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:01:47.101290  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:01:47.111719  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:01:47.111783  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:01:47.119486  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:01:47.128720  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:01:47.128889  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:01:47.137979  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:01:47.146623  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:01:47.146781  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
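[Editor's note: the grep/rm sequence above is minikube's stale-kubeconfig cleanup between init attempts. Each of the four kubeconfig files survives only if it already points at the cluster endpoint; since kubeadm reset removed them all, every grep exits with status 2 and every file is force-removed before the retry. A rough shell sketch of the same per-file check (illustrative only; the real logic is the Go code in kubeadm.go, not a shell loop):]

    for f in admin kubelet controller-manager scheduler; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf"; then
        # File is missing or points at the wrong endpoint: drop it so kubeadm init starts clean.
        sudo rm -f "/etc/kubernetes/${f}.conf"
      fi
    done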
	I1227 10:01:47.155774  769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:01:47.197997  769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:01:47.198575  769090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:01:47.334679  769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:01:47.334774  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:01:47.334814  769090 kubeadm.go:319] OS: Linux
	I1227 10:01:47.334877  769090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:01:47.334937  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:01:47.335000  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:01:47.335065  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:01:47.335164  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:01:47.335236  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:01:47.335294  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:01:47.335359  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:01:47.335418  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:01:47.413630  769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:01:47.413746  769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:01:47.413842  769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:01:47.427809  769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:01:47.431698  769090 out.go:252]   - Generating certificates and keys ...
	I1227 10:01:47.431881  769090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:01:47.431951  769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:01:47.432047  769090 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:01:47.432114  769090 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:01:47.432211  769090 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:01:47.432286  769090 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:01:47.432360  769090 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:01:47.432432  769090 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:01:47.432512  769090 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:01:47.432810  769090 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:01:47.433140  769090 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:01:47.433248  769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:01:47.584725  769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:01:47.986204  769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:01:48.231719  769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:01:48.868258  769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:01:49.097361  769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:01:49.097857  769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:01:49.100455  769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:01:49.104347  769090 out.go:252]   - Booting up control plane ...
	I1227 10:01:49.104456  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:01:49.104539  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:01:49.105527  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:01:49.125548  769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:01:49.125672  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:01:49.134446  769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:01:49.134626  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:01:49.134694  769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:01:49.262884  769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:01:49.263010  769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:05:49.268185  769090 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001365675s
	I1227 10:05:49.268225  769090 kubeadm.go:319] 
	I1227 10:05:49.268518  769090 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:05:49.268647  769090 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:05:49.268979  769090 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:05:49.268993  769090 kubeadm.go:319] 
	I1227 10:05:49.269184  769090 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:05:49.269240  769090 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:05:49.269418  769090 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:05:49.269444  769090 kubeadm.go:319] 
	I1227 10:05:49.270052  769090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:05:49.271047  769090 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:05:49.271268  769090 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:05:49.272065  769090 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 10:05:49.272085  769090 kubeadm.go:319] 
	I1227 10:05:49.272169  769090 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
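[Editor's note: the failure mode shifted between attempts. The first init failed with the healthz GET hitting "context deadline exceeded" (kubelet reachable in principle but never healthy), while this retry fails with "connection refused" on 127.0.0.1:10248, meaning nothing is listening at all, which typically indicates the kubelet process exits right after start. The kubelet journal gathered below is the place to look; a sketch of pulling a shorter tail interactively:]

    $ out/minikube-linux-arm64 ssh -p force-systemd-env-159617 "sudo journalctl -xeu kubelet -n 100"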
	I1227 10:05:49.272247  769090 kubeadm.go:403] duration metric: took 8m7.909287482s to StartCluster
	I1227 10:05:49.272292  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:05:49.272367  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:05:49.329549  769090 cri.go:96] found id: ""
	I1227 10:05:49.329594  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.329603  769090 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:05:49.329610  769090 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:05:49.329676  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:05:49.386751  769090 cri.go:96] found id: ""
	I1227 10:05:49.386833  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.386845  769090 logs.go:284] No container was found matching "etcd"
	I1227 10:05:49.386854  769090 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:05:49.386926  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:05:49.449495  769090 cri.go:96] found id: ""
	I1227 10:05:49.449526  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.449534  769090 logs.go:284] No container was found matching "coredns"
	I1227 10:05:49.449541  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:05:49.449594  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:05:49.491413  769090 cri.go:96] found id: ""
	I1227 10:05:49.491448  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.491457  769090 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:05:49.491463  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:05:49.491519  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:05:49.530010  769090 cri.go:96] found id: ""
	I1227 10:05:49.530033  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.530041  769090 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:05:49.530048  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:05:49.530103  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:05:49.560962  769090 cri.go:96] found id: ""
	I1227 10:05:49.560990  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.560998  769090 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:05:49.561005  769090 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:05:49.561059  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:05:49.589143  769090 cri.go:96] found id: ""
	I1227 10:05:49.589164  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.589172  769090 logs.go:284] No container was found matching "kindnet"
	I1227 10:05:49.589183  769090 logs.go:123] Gathering logs for kubelet ...
	I1227 10:05:49.589194  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:05:49.663786  769090 logs.go:123] Gathering logs for dmesg ...
	I1227 10:05:49.663869  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:05:49.679647  769090 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:05:49.679678  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:05:49.789861  769090 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:49.773841    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.774212    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778518    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778831    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.780192    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:05:49.773841    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.774212    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778518    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778831    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.780192    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
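[Editor's note: the "describe nodes" gathering step is bound to fail here: with the kubelet down, the kube-apiserver static pod was never launched, so every request to https://localhost:8443 is refused. A sketch of confirming the apiserver container never came up under the docker runtime this profile uses:]

    $ out/minikube-linux-arm64 ssh -p force-systemd-env-159617 "sudo docker ps -a --filter name=kube-apiserver"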
	I1227 10:05:49.789896  769090 logs.go:123] Gathering logs for Docker ...
	I1227 10:05:49.789911  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 10:05:49.831802  769090 logs.go:123] Gathering logs for container status ...
	I1227 10:05:49.831872  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:05:49.871613  769090 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001365675s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
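[Editor's note: the [WARNING SystemVerification] lines above show this AWS host is still on cgroups v1, which kubelet v1.35 only tolerates when the KubeletConfiguration option named in the warning ('FailCgroupV1') is explicitly set to 'false'. A quick host-side check of which cgroup hierarchy is mounted (cgroup2fs means v2; tmpfs means v1, as here):]

    $ stat -fc %T /sys/fs/cgroup/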
	W1227 10:05:49.871654  769090 out.go:285] * 
	W1227 10:05:49.871703  769090 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001365675s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:05:49.871713  769090 out.go:285] * 
	W1227 10:05:49.871969  769090 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:05:49.878027  769090 out.go:203] 
	W1227 10:05:49.880016  769090 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001365675s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1227 10:05:49.880062  769090 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:05:49.880081  769090 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:05:49.883301  769090 out.go:203] 

** /stderr **
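The failure above is kubeadm timing out while waiting for a healthy kubelet, and minikube's own suggestion points at a cgroup-driver mismatch (the host's Docker reports CgroupDriver:cgroupfs later in this log, while these tests force systemd). A minimal retry sketch, assuming the same profile name and binary as in the log; the --extra-config flag is the one minikube itself suggests above, not a verified fix:

	out/minikube-linux-arm64 delete -p force-systemd-env-159617
	out/minikube-linux-arm64 start -p force-systemd-env-159617 --memory=3072 \
	  --driver=docker --container-runtime=docker \
	  --extra-config=kubelet.cgroup-driver=systemd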
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-159617 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-159617 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-27 10:05:50.497845476 +0000 UTC m=+2832.664276098
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-159617
helpers_test.go:244: (dbg) docker inspect force-systemd-env-159617:

-- stdout --
	[
	    {
	        "Id": "86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca",
	        "Created": "2025-12-27T09:57:29.413475531Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 770003,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-27T09:57:29.524162027Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713d59eed1cb7ea89d31e3cbef8f6274a6ab7509a421e96cf0197c334e76398d",
	        "ResolvConfPath": "/var/lib/docker/containers/86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca/hostname",
	        "HostsPath": "/var/lib/docker/containers/86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca/hosts",
	        "LogPath": "/var/lib/docker/containers/86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca/86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca-json.log",
	        "Name": "/force-systemd-env-159617",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-159617:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-159617",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "86e4dccfdd8228754beace569c3a0ee6ca1355ea41978e85dc4c98e629cabfca",
	                "LowerDir": "/var/lib/docker/overlay2/67454a1e34d2f2efb885ba85fb94f5d4c19dbbfab9ed698bc9ad0d978731526a-init/diff:/var/lib/docker/overlay2/9b533b4deb9c1d535741c7522fe23eacc0fb251795d87993eb74f4ff9ff9e74e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/67454a1e34d2f2efb885ba85fb94f5d4c19dbbfab9ed698bc9ad0d978731526a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/67454a1e34d2f2efb885ba85fb94f5d4c19dbbfab9ed698bc9ad0d978731526a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/67454a1e34d2f2efb885ba85fb94f5d4c19dbbfab9ed698bc9ad0d978731526a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-159617",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-159617/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-159617",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-159617",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-159617",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "21659d24896289f4c7a7437385ac6a309fc785f6bca4cf54e6e7c5bc9cdcd756",
	            "SandboxKey": "/var/run/docker/netns/21659d248962",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33728"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33729"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33730"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33731"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-159617": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:42:6b:f7:7f:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "69485b6f506b6f45a4112269f11fae4ad75bf30dc1028b1c3d70bbaa0ec83fd1",
	                    "EndpointID": "a7686857a4e57c2718ee3ef11466a621b2597d601d47ae78a11b60f9e038c956",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-159617",
	                        "86e4dccfdd82"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
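Individual fields of the inspect output above can be pulled with Go templates instead of scanning the full JSON. A small sketch using the container name from this run; the first template reads the cgroup namespace mode shown under HostConfig, the second is the same forwarded-SSH-port lookup the harness itself logs further below:

	docker container inspect -f '{{.HostConfig.CgroupnsMode}}' force-systemd-env-159617
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-159617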
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-159617 -n force-systemd-env-159617
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-159617 -n force-systemd-env-159617: exit status 6 (435.275879ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 10:05:50.911340  782241 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-159617" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig

** /stderr **
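The exit status 6 follows from the kubeconfig endpoint error above: the profile never registered itself in /home/jenkins/minikube-integration/22343-548332/kubeconfig. Per the warning in the stdout block, the repair (had the cluster actually come up) would be:

	out/minikube-linux-arm64 -p force-systemd-env-159617 update-context
	kubectl config current-context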
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-159617 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p offline-docker-663445                                                                                                      │ offline-docker-663445     │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ 27 Dec 25 09:57 UTC │
	│ ssh     │ -p cilium-334346 sudo systemctl status docker --all --full --no-pager                                                         │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat docker --no-pager                                                                         │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /etc/docker/daemon.json                                                                             │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo docker system info                                                                                      │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cri-dockerd --version                                                                                   │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat containerd --no-pager                                                                     │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo cat /etc/containerd/config.toml                                                                         │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo containerd config dump                                                                                  │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo systemctl cat crio --no-pager                                                                           │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ -p cilium-334346 sudo crio config                                                                                             │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ delete  │ -p cilium-334346                                                                                                              │ cilium-334346             │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │ 27 Dec 25 09:57 UTC │
	│ start   │ -p force-systemd-env-159617 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-159617  │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ start   │ -p force-systemd-flag-574701 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 09:57 UTC │                     │
	│ ssh     │ force-systemd-flag-574701 ssh docker info --format {{.CgroupDriver}}                                                          │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	│ delete  │ -p force-systemd-flag-574701                                                                                                  │ force-systemd-flag-574701 │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │                     │
	│ ssh     │ force-systemd-env-159617 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-159617  │ jenkins │ v1.37.0 │ 27 Dec 25 10:05 UTC │ 27 Dec 25 10:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
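	The two ssh rows at the bottom of the audit table are the tests' cgroup-driver probes; run standalone, the command from the table looks like this and is expected to print "systemd" only when forcing succeeded:
	
	out/minikube-linux-arm64 -p force-systemd-env-159617 ssh "docker info --format {{.CgroupDriver}}"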
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:57:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
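	(Decoding that format against the first entry below: "I" is the Info severity, "1227" is Dec 27, "09:57:23.854045" the timestamp, "769388" the thread id, and "out.go:360" the source file and line.)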
	I1227 09:57:23.854045  769388 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:57:23.854214  769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.854225  769388 out.go:374] Setting ErrFile to fd 2...
	I1227 09:57:23.854241  769388 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:57:23.854500  769388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:57:23.854935  769388 out.go:368] Setting JSON to false
	I1227 09:57:23.855775  769388 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16795,"bootTime":1766812649,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:57:23.855839  769388 start.go:143] virtualization:  
	I1227 09:57:23.860623  769388 out.go:179] * [force-systemd-flag-574701] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:57:23.864301  769388 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:57:23.864369  769388 notify.go:221] Checking for updates...
	I1227 09:57:23.871858  769388 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:57:23.879831  769388 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:57:23.884111  769388 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:57:23.887027  769388 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:57:23.890016  769388 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:57:23.893523  769388 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:23.893679  769388 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:57:23.942486  769388 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:57:23.942607  769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:24.033935  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:58 SystemTime:2025-12-27 09:57:24.020858019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:24.034041  769388 docker.go:319] overlay module found
	I1227 09:57:24.037348  769388 out.go:179] * Using the docker driver based on user configuration
	I1227 09:57:24.040109  769388 start.go:309] selected driver: docker
	I1227 09:57:24.040131  769388 start.go:928] validating driver "docker" against <nil>
	I1227 09:57:24.040145  769388 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:57:24.040848  769388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:57:24.119453  769388 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:48 SystemTime:2025-12-27 09:57:24.103606726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:57:24.119606  769388 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:57:24.119820  769388 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:57:24.124043  769388 out.go:179] * Using Docker driver with root privileges
	I1227 09:57:24.126916  769388 cni.go:84] Creating CNI manager for ""
	I1227 09:57:24.126993  769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:24.127014  769388 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 09:57:24.127097  769388 start.go:353] cluster config:
	{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:24.130340  769388 out.go:179] * Starting "force-systemd-flag-574701" primary control-plane node in "force-systemd-flag-574701" cluster
	I1227 09:57:24.133152  769388 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 09:57:24.136080  769388 out.go:179] * Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:57:24.140060  769388 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:57:24.140141  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:24.140165  769388 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1227 09:57:24.140177  769388 cache.go:65] Caching tarball of preloaded images
	I1227 09:57:24.140256  769388 preload.go:251] Found /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1227 09:57:24.140271  769388 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1227 09:57:24.140383  769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
	I1227 09:57:24.140406  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json: {Name:mk4143ebcade308fb419077e3f8332f378dc7937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:24.161069  769388 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon, skipping pull
	I1227 09:57:24.161091  769388 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a exists in daemon, skipping load
	I1227 09:57:24.161109  769388 cache.go:243] Successfully downloaded all kic artifacts
	I1227 09:57:24.161140  769388 start.go:360] acquireMachinesLock for force-systemd-flag-574701: {Name:mkf48a67b67df727c9d74e45482507e00be21327 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1227 09:57:24.161254  769388 start.go:364] duration metric: took 93.536µs to acquireMachinesLock for "force-systemd-flag-574701"
	I1227 09:57:24.161290  769388 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1227 09:57:24.161353  769388 start.go:125] createHost starting for "" (driver="docker")
	I1227 09:57:23.421132  769090 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:57:23.421440  769090 start.go:159] libmachine.API.Create for "force-systemd-env-159617" (driver="docker")
	I1227 09:57:23.421474  769090 client.go:173] LocalClient.Create starting
	I1227 09:57:23.421564  769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
	I1227 09:57:23.421635  769090 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:23.421681  769090 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:23.421760  769090 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
	I1227 09:57:23.421803  769090 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:23.421839  769090 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:23.422293  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:57:23.444615  769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:57:23.444701  769090 network_create.go:284] running [docker network inspect force-systemd-env-159617] to gather additional debugging logs...
	I1227 09:57:23.444722  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617
	W1227 09:57:23.469730  769090 cli_runner.go:211] docker network inspect force-systemd-env-159617 returned with exit code 1
	I1227 09:57:23.469759  769090 network_create.go:287] error running [docker network inspect force-systemd-env-159617]: docker network inspect force-systemd-env-159617: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-159617 not found
	I1227 09:57:23.469771  769090 network_create.go:289] output of [docker network inspect force-systemd-env-159617]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-159617 not found
	
	** /stderr **
	I1227 09:57:23.469879  769090 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:23.484995  769090 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
	I1227 09:57:23.485264  769090 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
	I1227 09:57:23.485535  769090 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
	I1227 09:57:23.485842  769090 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-74a76dba2194 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:01:b7:05:f7:b5} reservation:<nil>}
	I1227 09:57:23.486201  769090 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a47360}
	I1227 09:57:23.486220  769090 network_create.go:124] attempt to create docker network force-systemd-env-159617 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1227 09:57:23.486272  769090 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-159617 force-systemd-env-159617
	I1227 09:57:23.588843  769090 network_create.go:108] docker network force-systemd-env-159617 192.168.85.0/24 created
	I1227 09:57:23.588880  769090 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-159617" container
	I1227 09:57:23.588951  769090 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:57:23.607164  769090 cli_runner.go:164] Run: docker volume create force-systemd-env-159617 --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:57:23.627044  769090 oci.go:103] Successfully created a docker volume force-systemd-env-159617
	I1227 09:57:23.627271  769090 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-159617-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --entrypoint /usr/bin/test -v force-systemd-env-159617:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:57:24.208049  769090 oci.go:107] Successfully prepared a docker volume force-systemd-env-159617
	I1227 09:57:24.208115  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:24.208125  769090 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:57:24.208197  769090 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:57:24.165884  769388 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1227 09:57:24.166208  769388 start.go:159] libmachine.API.Create for "force-systemd-flag-574701" (driver="docker")
	I1227 09:57:24.166249  769388 client.go:173] LocalClient.Create starting
	I1227 09:57:24.166322  769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem
	I1227 09:57:24.166357  769388 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:24.166372  769388 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:24.166421  769388 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem
	I1227 09:57:24.166486  769388 main.go:144] libmachine: Decoding PEM data...
	I1227 09:57:24.166501  769388 main.go:144] libmachine: Parsing certificate...
	I1227 09:57:24.166999  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1227 09:57:24.184851  769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1227 09:57:24.184931  769388 network_create.go:284] running [docker network inspect force-systemd-flag-574701] to gather additional debugging logs...
	I1227 09:57:24.184947  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701
	W1227 09:57:24.201338  769388 cli_runner.go:211] docker network inspect force-systemd-flag-574701 returned with exit code 1
	I1227 09:57:24.201367  769388 network_create.go:287] error running [docker network inspect force-systemd-flag-574701]: docker network inspect force-systemd-flag-574701: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-574701 not found
	I1227 09:57:24.201381  769388 network_create.go:289] output of [docker network inspect force-systemd-flag-574701]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-574701 not found
	
	** /stderr **
	I1227 09:57:24.201475  769388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:24.231038  769388 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
	I1227 09:57:24.231335  769388 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28c67d556586 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:3f:02:85:ee:bb} reservation:<nil>}
	I1227 09:57:24.231654  769388 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fae86aeafbd6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:6c:82:80:bb:6d} reservation:<nil>}
	I1227 09:57:24.232203  769388 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d880}
	I1227 09:57:24.232227  769388 network_create.go:124] attempt to create docker network force-systemd-flag-574701 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1227 09:57:24.232294  769388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-574701 force-systemd-flag-574701
	I1227 09:57:24.312633  769388 network_create.go:108] docker network force-systemd-flag-574701 192.168.76.0/24 created
	I1227 09:57:24.312662  769388 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-574701" container
	I1227 09:57:24.312733  769388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1227 09:57:24.330428  769388 cli_runner.go:164] Run: docker volume create force-systemd-flag-574701 --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true
	I1227 09:57:24.354470  769388 oci.go:103] Successfully created a docker volume force-systemd-flag-574701
	I1227 09:57:24.354571  769388 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-574701-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --entrypoint /usr/bin/test -v force-systemd-flag-574701:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -d /var/lib
	I1227 09:57:25.150777  769388 oci.go:107] Successfully prepared a docker volume force-systemd-flag-574701
	I1227 09:57:25.150847  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:25.150858  769388 kic.go:194] Starting extracting preloaded images to volume ...
	I1227 09:57:25.150937  769388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir
	I1227 09:57:29.290594  769090 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159617:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (5.082338598s)
	I1227 09:57:29.290643  769090 kic.go:203] duration metric: took 5.082509768s to extract preloaded images to volume ...
	W1227 09:57:29.290794  769090 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:57:29.290951  769090 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:57:29.395948  769090 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-159617 --name force-systemd-env-159617 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159617 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-159617 --network force-systemd-env-159617 --ip 192.168.85.2 --volume force-systemd-env-159617:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:57:29.916266  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Running}}
	I1227 09:57:29.946688  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:29.995989  769090 cli_runner.go:164] Run: docker exec force-systemd-env-159617 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:57:30.096142  769090 oci.go:144] the created container "force-systemd-env-159617" has a running status.
	I1227 09:57:30.096178  769090 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa...
	I1227 09:57:30.500317  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:57:30.500877  769090 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:57:30.556340  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:30.597973  769090 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:57:30.597993  769090 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-159617 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:57:30.707985  769090 cli_runner.go:164] Run: docker container inspect force-systemd-env-159617 --format={{.State.Status}}
	I1227 09:57:30.755347  769090 machine.go:94] provisionDockerMachine start ...
	I1227 09:57:30.755426  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:30.787678  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:30.788014  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:30.788023  769090 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:57:30.789480  769090 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40286->127.0.0.1:33728: read: connection reset by peer
	I1227 09:57:29.285806  769388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-574701:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a -I lz4 -xf /preloaded.tar -C /extractDir: (4.134820012s)
	I1227 09:57:29.285838  769388 kic.go:203] duration metric: took 4.134977669s to extract preloaded images to volume ...
	W1227 09:57:29.285987  769388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1227 09:57:29.286133  769388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1227 09:57:29.373204  769388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-574701 --name force-systemd-flag-574701 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-574701 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-574701 --network force-systemd-flag-574701 --ip 192.168.76.2 --volume force-systemd-flag-574701:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a
	I1227 09:57:29.767688  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Running}}
	I1227 09:57:29.794873  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:29.823050  769388 cli_runner.go:164] Run: docker exec force-systemd-flag-574701 stat /var/lib/dpkg/alternatives/iptables
	I1227 09:57:29.890557  769388 oci.go:144] the created container "force-systemd-flag-574701" has a running status.
	I1227 09:57:29.890594  769388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa...
	I1227 09:57:30.464624  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1227 09:57:30.464726  769388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1227 09:57:30.506648  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:30.563495  769388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1227 09:57:30.563516  769388 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-574701 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1227 09:57:30.675307  769388 cli_runner.go:164] Run: docker container inspect force-systemd-flag-574701 --format={{.State.Status}}
	I1227 09:57:30.705027  769388 machine.go:94] provisionDockerMachine start ...
	I1227 09:57:30.705109  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:30.748542  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:30.748883  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:30.748899  769388 main.go:144] libmachine: About to run SSH command:
	hostname
	I1227 09:57:30.749537  769388 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
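Both provisioners hit a transient SSH failure here ("connection reset by peer" for force-systemd-env-159617, "EOF" for force-systemd-flag-574701): the containers' sshd is still starting, and libmachine keeps retrying until the handshake succeeds about three seconds later. A hand-rolled readiness probe doing the same thing might look like this (a sketch only; the port and key path are taken from this log, not from minikube's actual retry code):

    until ssh -i /home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa \
          -p 33723 -o StrictHostKeyChecking=no docker@127.0.0.1 true 2>/dev/null; do
      sleep 1   # retry until sshd inside the container accepts connections
    done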
	I1227 09:57:33.935423  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
	
	I1227 09:57:33.935449  769090 ubuntu.go:182] provisioning hostname "force-systemd-env-159617"
	I1227 09:57:33.935561  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:33.958892  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:33.959223  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:33.959235  769090 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-159617 && echo "force-systemd-env-159617" | sudo tee /etc/hostname
	I1227 09:57:34.119941  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159617
	
	I1227 09:57:34.120013  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.142778  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.143089  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.143106  769090 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-159617' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-159617/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-159617' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:57:34.287061  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: 
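The /etc/hosts command above returned no output, which is consistent with the outer grep matching: Docker itself adds a `<container-ip> <hostname>` entry to a container's /etc/hosts at start, so neither the sed nor the tee branch needed to run. A plausible in-guest check (IP taken from the SAN list a few lines down):

    grep force-systemd-env-159617 /etc/hosts
    # e.g.: 192.168.85.2	force-systemd-env-159617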
	I1227 09:57:34.287083  769090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
	I1227 09:57:34.287101  769090 ubuntu.go:190] setting up certificates
	I1227 09:57:34.287154  769090 provision.go:84] configureAuth start
	I1227 09:57:34.287222  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:34.331489  769090 provision.go:143] copyHostCerts
	I1227 09:57:34.331534  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.331572  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
	I1227 09:57:34.331590  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.331648  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
	I1227 09:57:34.331728  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.331749  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
	I1227 09:57:34.331757  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.331779  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
	I1227 09:57:34.331821  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.331841  769090 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
	I1227 09:57:34.331846  769090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.331869  769090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
	I1227 09:57:34.331917  769090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-159617 san=[127.0.0.1 192.168.85.2 force-systemd-env-159617 localhost minikube]
	I1227 09:57:34.598391  769090 provision.go:177] copyRemoteCerts
	I1227 09:57:34.598509  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:57:34.598589  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.616730  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:34.716531  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:57:34.716639  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:57:34.746980  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:57:34.747057  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:57:34.766043  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:57:34.766100  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:57:34.785469  769090 provision.go:87] duration metric: took 498.291074ms to configureAuth
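configureAuth pushed ca.pem, server.pem and server-key.pem into /etc/docker inside the guest; the docker.service unit written below consumes them via --tlsverify/--tlscacert/--tlscert/--tlskey, so the daemon will demand client certificates on tcp://0.0.0.0:2376. A hypothetical manual client check from the host, reusing the client cert and key that the copyHostCerts lines above manage:

    docker --tlsverify \
      --tlscacert /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem \
      --tlscert   /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem \
      --tlskey    /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem \
      -H tcp://192.168.85.2:2376 version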
	I1227 09:57:34.785494  769090 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:57:34.785662  769090 config.go:182] Loaded profile config "force-systemd-env-159617": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:34.785721  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.802871  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.803337  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.803351  769090 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 09:57:34.967701  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 09:57:34.967720  769090 ubuntu.go:71] root file system type: overlay
	I1227 09:57:34.967841  769090 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 09:57:34.967907  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:34.988654  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.988961  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:34.989046  769090 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 09:57:35.153832  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 09:57:35.153922  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:35.181379  769090 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:35.181695  769090 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33728 <nil> <nil>}
	I1227 09:57:35.181712  769090 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
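This one-liner is an install-if-changed idiom: `diff -u` exits non-zero when the files differ, so the `|| { ... }` branch moves the new unit into place and restarts docker only when something actually changed, and the unified diff that follows in the output doubles as a record of what changed. Spelled out (render_unit is an illustrative stand-in for the tee command above):

    render_unit | sudo tee /lib/systemd/system/docker.service.new >/dev/null
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    fi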
	I1227 09:57:36.406595  769090 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 09:57:35.148525118 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 09:57:36.406630  769090 machine.go:97] duration metric: took 5.651265169s to provisionDockerMachine
	I1227 09:57:36.406643  769090 client.go:176] duration metric: took 12.985158917s to LocalClient.Create
	I1227 09:57:36.406661  769090 start.go:167] duration metric: took 12.98522367s to libmachine.API.Create "force-systemd-env-159617"
	I1227 09:57:36.406668  769090 start.go:293] postStartSetup for "force-systemd-env-159617" (driver="docker")
	I1227 09:57:36.406681  769090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:57:36.406740  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:57:36.406784  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.424421  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.529164  769090 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:57:36.534359  769090 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:57:36.534393  769090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:57:36.534406  769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
	I1227 09:57:36.534457  769090 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
	I1227 09:57:36.534546  769090 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
	I1227 09:57:36.534559  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
	I1227 09:57:36.534656  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:57:36.545176  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:36.564519  769090 start.go:296] duration metric: took 157.818194ms for postStartSetup
	I1227 09:57:36.564872  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:36.582964  769090 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/config.json ...
	I1227 09:57:36.583262  769090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:57:36.583316  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.603598  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.705489  769090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:57:36.712003  769090 start.go:128] duration metric: took 13.295769122s to createHost
	I1227 09:57:36.712030  769090 start.go:83] releasing machines lock for "force-systemd-env-159617", held for 13.295895493s
	I1227 09:57:36.712104  769090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159617
	I1227 09:57:36.735458  769090 ssh_runner.go:195] Run: cat /version.json
	I1227 09:57:36.735509  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.735527  769090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:57:36.735606  769090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159617
	I1227 09:57:36.763793  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.767335  769090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33728 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-env-159617/id_rsa Username:docker}
	I1227 09:57:36.874762  769090 ssh_runner.go:195] Run: systemctl --version
	I1227 09:57:36.974322  769090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:57:36.981372  769090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:57:36.981442  769090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:57:37.027684  769090 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
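The find command above sidelines any bridge/podman CNI configs by renaming them to *.mk_disabled so they cannot conflict with the network plumbing minikube sets up for the docker runtime. Undoing it by hand would be the reverse rename (a sketch, not a minikube command):

    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;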
	I1227 09:57:37.027787  769090 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.027825  769090 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.028014  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.048308  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:57:37.060423  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:57:37.072092  769090 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:57:37.072150  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:57:37.082000  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.091287  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:57:37.099834  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.120427  769090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:57:37.128839  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:57:37.139785  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:57:37.156006  769090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:57:37.167227  769090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:57:37.176858  769090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:57:37.188913  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:37.345099  769090 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:57:37.452805  769090 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.452846  769090 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.452907  769090 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 09:57:37.474525  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.495905  769090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:57:37.546927  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.567236  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:57:37.591088  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.608681  769090 ssh_runner.go:195] Run: which cri-dockerd
	I1227 09:57:37.613473  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 09:57:37.622987  769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 09:57:37.639261  769090 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 09:57:37.803450  769090 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 09:57:37.985157  769090 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 09:57:37.985302  769090 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
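docker.go:578 ships a 129-byte daemon.json into the guest to force the systemd cgroup driver, which is the whole point of TestForceSystemdFlag/TestForceSystemdEnv. The payload itself is not echoed in the log; a minimal file with the same effect would be the following (any field beyond exec-opts is an assumption not implied by this log):

    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF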
	I1227 09:57:38.001357  769090 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 09:57:38.018865  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:33.902589  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
	
	I1227 09:57:33.902611  769388 ubuntu.go:182] provisioning hostname "force-systemd-flag-574701"
	I1227 09:57:33.902682  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:33.920165  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:33.920469  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:33.920480  769388 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-574701 && echo "force-systemd-flag-574701" | sudo tee /etc/hostname
	I1227 09:57:34.085277  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-574701
	
	I1227 09:57:34.085356  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.102383  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.102698  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.102716  769388 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-574701' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-574701/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-574701' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1227 09:57:34.255031  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1227 09:57:34.255059  769388 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22343-548332/.minikube CaCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22343-548332/.minikube}
	I1227 09:57:34.255083  769388 ubuntu.go:190] setting up certificates
	I1227 09:57:34.255093  769388 provision.go:84] configureAuth start
	I1227 09:57:34.255175  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:34.271814  769388 provision.go:143] copyHostCerts
	I1227 09:57:34.271855  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.271887  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem, removing ...
	I1227 09:57:34.271900  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem
	I1227 09:57:34.271973  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/ca.pem (1078 bytes)
	I1227 09:57:34.272067  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.272089  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem, removing ...
	I1227 09:57:34.272097  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem
	I1227 09:57:34.272126  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/cert.pem (1123 bytes)
	I1227 09:57:34.272178  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.272198  769388 exec_runner.go:144] found /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem, removing ...
	I1227 09:57:34.272205  769388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem
	I1227 09:57:34.272232  769388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22343-548332/.minikube/key.pem (1675 bytes)
	I1227 09:57:34.272293  769388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-574701 san=[127.0.0.1 192.168.76.2 force-systemd-flag-574701 localhost minikube]
	I1227 09:57:34.545510  769388 provision.go:177] copyRemoteCerts
	I1227 09:57:34.545576  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1227 09:57:34.545630  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.562287  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:34.663483  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1227 09:57:34.663552  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1227 09:57:34.681829  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1227 09:57:34.681902  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1227 09:57:34.701079  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1227 09:57:34.701139  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1227 09:57:34.722250  769388 provision.go:87] duration metric: took 467.13373ms to configureAuth
	I1227 09:57:34.722280  769388 ubuntu.go:206] setting minikube options for container-runtime
	I1227 09:57:34.722503  769388 config.go:182] Loaded profile config "force-systemd-flag-574701": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:57:34.722587  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.748482  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.748825  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.748842  769388 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1227 09:57:34.911917  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1227 09:57:34.911937  769388 ubuntu.go:71] root file system type: overlay
	I1227 09:57:34.912090  769388 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1227 09:57:34.912153  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:34.931590  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:34.931909  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:34.931998  769388 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1227 09:57:35.094955  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1227 09:57:35.095071  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:35.115477  769388 main.go:144] libmachine: Using SSH client type: native
	I1227 09:57:35.115820  769388 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33723 <nil> <nil>}
	I1227 09:57:35.115843  769388 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1227 09:57:36.313708  769388 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-27 09:57:35.088526773 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1227 09:57:36.313732  769388 machine.go:97] duration metric: took 5.608683566s to provisionDockerMachine
	I1227 09:57:36.313745  769388 client.go:176] duration metric: took 12.147489846s to LocalClient.Create
	I1227 09:57:36.313757  769388 start.go:167] duration metric: took 12.14755212s to libmachine.API.Create "force-systemd-flag-574701"
	I1227 09:57:36.313768  769388 start.go:293] postStartSetup for "force-systemd-flag-574701" (driver="docker")
	I1227 09:57:36.313777  769388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1227 09:57:36.313843  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1227 09:57:36.313894  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.333968  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.436051  769388 ssh_runner.go:195] Run: cat /etc/os-release
	I1227 09:57:36.439811  769388 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1227 09:57:36.439837  769388 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1227 09:57:36.439848  769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/addons for local assets ...
	I1227 09:57:36.439901  769388 filesync.go:126] Scanning /home/jenkins/minikube-integration/22343-548332/.minikube/files for local assets ...
	I1227 09:57:36.439994  769388 filesync.go:149] local asset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> 5501972.pem in /etc/ssl/certs
	I1227 09:57:36.440010  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /etc/ssl/certs/5501972.pem
	I1227 09:57:36.440117  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1227 09:57:36.449353  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:36.472877  769388 start.go:296] duration metric: took 159.095049ms for postStartSetup
	I1227 09:57:36.473245  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:36.490073  769388 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/config.json ...
	I1227 09:57:36.490364  769388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:57:36.490419  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.508708  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.616568  769388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1227 09:57:36.622218  769388 start.go:128] duration metric: took 12.460850316s to createHost
	I1227 09:57:36.622246  769388 start.go:83] releasing machines lock for "force-systemd-flag-574701", held for 12.460980323s
	I1227 09:57:36.622323  769388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-574701
	I1227 09:57:36.641788  769388 ssh_runner.go:195] Run: cat /version.json
	I1227 09:57:36.641849  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.642098  769388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1227 09:57:36.642163  769388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-574701
	I1227 09:57:36.664287  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.672747  769388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33723 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/force-systemd-flag-574701/id_rsa Username:docker}
	I1227 09:57:36.780184  769388 ssh_runner.go:195] Run: systemctl --version
	I1227 09:57:36.880930  769388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1227 09:57:36.887011  769388 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1227 09:57:36.887080  769388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1227 09:57:36.924112  769388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1227 09:57:36.924139  769388 start.go:496] detecting cgroup driver to use...
	I1227 09:57:36.924152  769388 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:36.924252  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:36.946873  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1227 09:57:36.956487  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1227 09:57:36.966480  769388 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1227 09:57:36.966545  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1227 09:57:36.977403  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:36.987483  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1227 09:57:36.998514  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1227 09:57:37.010694  769388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1227 09:57:37.022875  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1227 09:57:37.036011  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1227 09:57:37.044803  769388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1227 09:57:37.054260  769388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1227 09:57:37.063604  769388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1227 09:57:37.071796  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:37.216587  769388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1227 09:57:37.323467  769388 start.go:496] detecting cgroup driver to use...
	I1227 09:57:37.323492  769388 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1227 09:57:37.323546  769388 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1227 09:57:37.352336  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.365635  769388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1227 09:57:37.402353  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1227 09:57:37.420004  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1227 09:57:37.441069  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1227 09:57:37.461000  769388 ssh_runner.go:195] Run: which cri-dockerd
	I1227 09:57:37.468781  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1227 09:57:37.477924  769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1227 09:57:37.502109  769388 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1227 09:57:37.672967  769388 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1227 09:57:37.840323  769388 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1227 09:57:37.840416  769388 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1227 09:57:37.872525  769388 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1227 09:57:37.886221  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:38.039548  769388 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 09:57:38.563380  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:57:38.577307  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 09:57:38.592258  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:38.608999  769388 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 09:57:38.783640  769388 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 09:57:38.955435  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.116493  769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 09:57:39.131867  769388 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 09:57:39.146438  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.292670  769388 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 09:57:39.371970  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:39.392203  769388 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 09:57:39.392325  769388 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 09:57:39.396824  769388 start.go:574] Will wait 60s for crictl version
	I1227 09:57:39.396962  769388 ssh_runner.go:195] Run: which crictl
	I1227 09:57:39.400890  769388 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:57:39.425825  769388 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 09:57:39.425938  769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.452940  769388 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:38.182967  769090 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1227 09:57:38.643595  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1227 09:57:38.659567  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1227 09:57:38.676415  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:38.693157  769090 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1227 09:57:38.864384  769090 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1227 09:57:39.021630  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.162919  769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1227 09:57:39.195686  769090 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1227 09:57:39.211669  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.365125  769090 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1227 09:57:39.465622  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1227 09:57:39.482004  769090 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1227 09:57:39.482130  769090 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1227 09:57:39.486220  769090 start.go:574] Will wait 60s for crictl version
	I1227 09:57:39.486340  769090 ssh_runner.go:195] Run: which crictl
	I1227 09:57:39.491356  769090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1227 09:57:39.522612  769090 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1227 09:57:39.522673  769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.553580  769090 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1227 09:57:39.589853  769090 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 09:57:39.589955  769090 cli_runner.go:164] Run: docker network inspect force-systemd-env-159617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:39.609607  769090 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1227 09:57:39.613910  769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
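
The bash one-liner above updates /etc/hosts idempotently: strip any existing host.minikube.internal entry, append the current mapping, then copy the temp file back over /etc/hosts. An equivalent sketch (illustrative, not minikube's code):

    // Rewrite /etc/hosts so it contains exactly one host.minikube.internal entry.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Same filter as `grep -v $'\thost.minikube.internal$'` in the log.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.85.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err) // the logged command does this step via `sudo cp`
        }
    }
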
	I1227 09:57:39.623309  769090 kubeadm.go:884] updating cluster {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:57:39.623458  769090 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:39.623516  769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.644906  769090 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.644931  769090 docker.go:624] Images already preloaded, skipping extraction
	I1227 09:57:39.644988  769090 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.664959  769090 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.664988  769090 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:57:39.664998  769090 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1227 09:57:39.665088  769090 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-159617 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
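
In the kubelet unit above, the empty "ExecStart=" line is systemd semantics, not a mistake: a drop-in must blank the inherited ExecStart before assigning a replacement command, otherwise systemd rejects a second ExecStart on a non-oneshot service. Sketch of rendering that [Service] block (illustrative, not minikube's template code):

    // Emit the [Service] override with the clear-then-set ExecStart pattern.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        kubelet := "/var/lib/minikube/binaries/v1.35.0/kubelet"
        args := []string{ // flags taken from the unit dump above
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=force-systemd-env-159617",
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=192.168.85.2",
        }
        fmt.Println("[Service]")
        fmt.Println("ExecStart=") // clear the base unit's command first
        fmt.Println("ExecStart=" + kubelet + " " + strings.Join(args, " "))
    }
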
	I1227 09:57:39.665158  769090 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 09:57:39.747517  769090 cni.go:84] Creating CNI manager for ""
	I1227 09:57:39.747540  769090 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:39.747563  769090 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:57:39.747608  769090 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-159617 NodeName:force-systemd-env-159617 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:57:39.747762  769090 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-159617"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:57:39.747834  769090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:57:39.760575  769090 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:57:39.760648  769090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:57:39.775516  769090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1227 09:57:39.797752  769090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:57:39.810219  769090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
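
The 2224-byte kubeadm.yaml.new written above is the config dumped earlier in the log. A quick way to confirm the forced setting landed in the rendered file is to look for "cgroupDriver: systemd"; a stdlib-only sketch, assuming the file path from the log:

    // Scan the rendered kubeadm config for the systemd cgroup-driver setting.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        b, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet cgroupDriver is systemd:",
            strings.Contains(string(b), "cgroupDriver: systemd"))
    }
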
	I1227 09:57:39.828590  769090 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:57:39.832469  769090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.842381  769090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:40.061511  769090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:57:40.082736  769090 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617 for IP: 192.168.85.2
	I1227 09:57:40.082833  769090 certs.go:195] generating shared ca certs ...
	I1227 09:57:40.082870  769090 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.083102  769090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
	I1227 09:57:40.083211  769090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
	I1227 09:57:40.083245  769090 certs.go:257] generating profile certs ...
	I1227 09:57:40.083338  769090 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key
	I1227 09:57:40.083381  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt with IP's: []
	I1227 09:57:40.290500  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt ...
	I1227 09:57:40.290601  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.crt: {Name:mkdef657d92ac442b8ca8d24bafb061317e911bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.290877  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key ...
	I1227 09:57:40.290927  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/client.key: {Name:mkd98e7a2fa2573ec393c9c33ed2af8ef854cd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.291097  769090 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17
	I1227 09:57:40.291156  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1227 09:57:40.441193  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 ...
	I1227 09:57:40.441292  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17: {Name:mka639a3de484b92be9c260344df9e8bdedff2cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.441538  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 ...
	I1227 09:57:40.441579  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17: {Name:mkdfe6ab9be254d46412de6c107cb553d654d1d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.441720  769090 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt
	I1227 09:57:40.441858  769090 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key.c11c0d17 -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key
	I1227 09:57:40.441988  769090 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key
	I1227 09:57:40.442045  769090 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt with IP's: []
	I1227 09:57:40.780289  769090 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt ...
	I1227 09:57:40.780323  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt: {Name:mk8f859572961556f4c1a1a4febed8df29d82f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780533  769090 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key ...
	I1227 09:57:40.780542  769090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key: {Name:mk7056050a32483ae445b0ae07006f0562cf0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
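
"generating signed profile cert ... with IP's: [...]" boils down to issuing an x509 leaf signed by the shared minikube CA, with those addresses as IP SANs. A compressed sketch of the mechanics (illustrative; the real code also persists PEM files under locks, as the WriteFile lines above show):

    // Issue a CA plus an apiserver-style leaf cert with the logged IP SANs.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses: []net.IP{ // the IP SAN list logged for apiserver.crt
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued apiserver-style cert, %d DER bytes\n", len(der))
    }
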
	I1227 09:57:40.780640  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:57:40.780659  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:57:40.780678  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:57:40.780691  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:57:40.780705  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:57:40.780722  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:57:40.780742  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:57:40.780754  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:57:40.780817  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
	W1227 09:57:40.780867  769090 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
	I1227 09:57:40.780876  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:57:40.780908  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:57:40.780938  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:57:40.780966  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
	I1227 09:57:40.781023  769090 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:40.781067  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
	I1227 09:57:40.781079  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:40.781090  769090 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
	I1227 09:57:40.781688  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:57:40.814042  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:57:40.838435  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:57:40.880890  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:57:40.906281  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:57:40.928048  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1227 09:57:40.950863  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:57:40.973554  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-env-159617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:57:40.993400  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
	I1227 09:57:41.017107  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:57:41.037355  769090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
	I1227 09:57:41.066525  769090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:57:41.095696  769090 ssh_runner.go:195] Run: openssl version
	I1227 09:57:41.107307  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.118732  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
	I1227 09:57:41.132658  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.138503  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.138605  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.185800  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:57:41.193790  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
	I1227 09:57:41.201492  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.208841  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
	I1227 09:57:41.216427  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.220469  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.220555  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.265817  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.273569  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.281083  769090 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.288616  769090 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:57:41.296277  769090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.300012  769090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.300113  769090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.343100  769090 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:57:41.351309  769090 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
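
The openssl/ln pairs above implement OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is symlinked as <subject-hash>.0, which is why minikubeCA.pem ends up behind b5213941.0 in this run. One such pair as a standalone sketch:

    // Hash a cert's subject with `openssl x509 -hash` and symlink the PEM
    // under that name, matching the logged commands.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0
        _ = os.Remove(link) // emulate `ln -fs`: replace an existing link
        if err := os.Symlink(pem, link); err != nil {
            panic(err) // the logged commands run under sudo
        }
        fmt.Println("linked", link, "->", pem)
    }
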
	I1227 09:57:41.358883  769090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:57:41.362914  769090 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:57:41.362973  769090 kubeadm.go:401] StartCluster: {Name:force-systemd-env-159617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-159617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:41.363101  769090 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 09:57:41.381051  769090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:57:41.392106  769090 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:57:41.400552  769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:57:41.400659  769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:57:41.412462  769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:57:41.412533  769090 kubeadm.go:158] found existing configuration files:
	
	I1227 09:57:41.412612  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:57:41.421832  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:57:41.421945  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:57:41.432909  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:57:41.443013  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:57:41.443076  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:57:41.451990  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.462018  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:57:41.462083  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.470161  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:57:41.479985  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:57:41.480066  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
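
The four grep/rm pairs above apply one rule per kubeconfig: if the file does not already point at https://control-plane.minikube.internal:8443 (here they do not exist at all, hence the status-2 greps), remove it so kubeadm regenerates it. The rule as a standalone sketch:

    // Drop stale kubeconfigs that lack the expected control-plane endpoint.
    package main

    import (
        "bytes"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8443")
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                os.Remove(f) // mirrors the logged `sudo rm -f`
            }
        }
    }
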
	I1227 09:57:41.488640  769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:57:41.541967  769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:57:41.544237  769090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:57:41.651990  769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:57:41.652128  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:57:41.652184  769090 kubeadm.go:319] OS: Linux
	I1227 09:57:41.652254  769090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:57:41.652330  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:57:41.652403  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:57:41.652481  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:57:41.652557  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:57:41.652636  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:57:41.652713  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:57:41.652790  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:57:41.652862  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
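
The CGROUPS_* lines come from kubeadm's preflight system verification, which on cgroup v1 reads /proc/cgroups to see which controllers are enabled. A minimal probe sketch (v1 only; kubeadm's real check covers more cases):

    // Print each cgroup v1 controller's enabled/disabled state from /proc/cgroups.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/cgroups")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "#") {
                continue // header: subsys_name hierarchy num_cgroups enabled
            }
            fields := strings.Fields(line)
            if len(fields) == 4 {
                state := "disabled"
                if fields[3] == "1" {
                    state = "enabled"
                }
                fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
            }
        }
    }
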
	I1227 09:57:41.748451  769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:57:41.748635  769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:57:41.748758  769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:57:41.778942  769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:57:39.487385  769388 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1227 09:57:39.487511  769388 cli_runner.go:164] Run: docker network inspect force-systemd-flag-574701 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1227 09:57:39.509398  769388 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1227 09:57:39.513521  769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.525777  769388 kubeadm.go:884] updating cluster {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1227 09:57:39.525889  769388 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1227 09:57:39.525945  769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.550774  769388 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.550799  769388 docker.go:624] Images already preloaded, skipping extraction
	I1227 09:57:39.550866  769388 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1227 09:57:39.574219  769388 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1227 09:57:39.574242  769388 cache_images.go:86] Images are preloaded, skipping loading
	I1227 09:57:39.574252  769388 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1227 09:57:39.574354  769388 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-574701 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1227 09:57:39.574415  769388 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1227 09:57:39.642105  769388 cni.go:84] Creating CNI manager for ""
	I1227 09:57:39.642130  769388 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:57:39.642146  769388 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1227 09:57:39.642167  769388 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-574701 NodeName:force-systemd-flag-574701 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1227 09:57:39.642292  769388 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-574701"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1227 09:57:39.642363  769388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1227 09:57:39.651846  769388 binaries.go:51] Found k8s binaries, skipping transfer
	I1227 09:57:39.651910  769388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1227 09:57:39.661240  769388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1227 09:57:39.677750  769388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1227 09:57:39.692714  769388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1227 09:57:39.705586  769388 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1227 09:57:39.709624  769388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1227 09:57:39.719304  769388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1227 09:57:39.872388  769388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1227 09:57:39.905933  769388 certs.go:69] Setting up /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701 for IP: 192.168.76.2
	I1227 09:57:39.905958  769388 certs.go:195] generating shared ca certs ...
	I1227 09:57:39.905975  769388 certs.go:227] acquiring lock for ca certs: {Name:mka57d8b1d581d5829589e9bbd771e6117908cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:39.906194  769388 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key
	I1227 09:57:39.906270  769388 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key
	I1227 09:57:39.906284  769388 certs.go:257] generating profile certs ...
	I1227 09:57:39.906359  769388 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key
	I1227 09:57:39.906376  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt with IP's: []
	I1227 09:57:40.185176  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt ...
	I1227 09:57:40.185209  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.crt: {Name:mkd8df8f694ab6bd0be298ca10765d50a0840ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.185510  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key ...
	I1227 09:57:40.185530  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/client.key: {Name:mkedfb2c92eeb1c8634de35cfef29ff1eb8c71f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.185683  769388 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a
	I1227 09:57:40.185706  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1227 09:57:40.780814  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a ...
	I1227 09:57:40.780832  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a: {Name:mk220ae28824c87aa5d8ba64a794d883980a39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.780959  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a ...
	I1227 09:57:40.780966  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a: {Name:mkac97d48f25e58d566aafd93cbcf157b2cb0117 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.781034  769388 certs.go:382] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt
	I1227 09:57:40.781140  769388 certs.go:386] copying /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key.b6598b4a -> /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key
	I1227 09:57:40.781206  769388 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key
	I1227 09:57:40.781219  769388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt with IP's: []
	I1227 09:57:40.864310  769388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt ...
	I1227 09:57:40.864342  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt: {Name:mk5dc7c59c3dfc68c7c8e2186f25c0bda8c48900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.864549  769388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key ...
	I1227 09:57:40.864569  769388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key: {Name:mk7098be4d9c15bf1f3c8453e90bcc9388cdc9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:57:40.864678  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1227 09:57:40.864715  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1227 09:57:40.864736  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1227 09:57:40.864755  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1227 09:57:40.864768  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1227 09:57:40.864796  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1227 09:57:40.864821  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1227 09:57:40.864837  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1227 09:57:40.864913  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem (1338 bytes)
	W1227 09:57:40.864990  769388 certs.go:480] ignoring /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197_empty.pem, impossibly tiny 0 bytes
	I1227 09:57:40.865007  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca-key.pem (1679 bytes)
	I1227 09:57:40.865038  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/ca.pem (1078 bytes)
	I1227 09:57:40.865102  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/cert.pem (1123 bytes)
	I1227 09:57:40.865134  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/key.pem (1675 bytes)
	I1227 09:57:40.865199  769388 certs.go:484] found cert: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem (1708 bytes)
	I1227 09:57:40.865244  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:40.865267  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem -> /usr/share/ca-certificates/550197.pem
	I1227 09:57:40.865282  769388 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem -> /usr/share/ca-certificates/5501972.pem
	I1227 09:57:40.865799  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1227 09:57:40.898569  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1227 09:57:40.927873  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1227 09:57:40.948313  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1227 09:57:40.969255  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1227 09:57:40.989875  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1227 09:57:41.010787  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1227 09:57:41.031724  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/force-systemd-flag-574701/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1227 09:57:41.051433  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1227 09:57:41.077779  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/certs/550197.pem --> /usr/share/ca-certificates/550197.pem (1338 bytes)
	I1227 09:57:41.108786  769388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/ssl/certs/5501972.pem --> /usr/share/ca-certificates/5501972.pem (1708 bytes)
	I1227 09:57:41.133210  769388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1227 09:57:41.147828  769388 ssh_runner.go:195] Run: openssl version
	I1227 09:57:41.154460  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.161904  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/5501972.pem /etc/ssl/certs/5501972.pem
	I1227 09:57:41.169300  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.173499  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 27 09:25 /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.173602  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5501972.pem
	I1227 09:57:41.219730  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.227914  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/5501972.pem /etc/ssl/certs/3ec20f2e.0
	I1227 09:57:41.234863  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.242037  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1227 09:57:41.252122  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.256231  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 27 09:19 /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.256330  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1227 09:57:41.303396  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1227 09:57:41.311657  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1227 09:57:41.319645  769388 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.327015  769388 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/550197.pem /etc/ssl/certs/550197.pem
	I1227 09:57:41.334332  769388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.338256  769388 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 27 09:25 /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.338360  769388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/550197.pem
	I1227 09:57:41.382878  769388 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1227 09:57:41.390786  769388 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/550197.pem /etc/ssl/certs/51391683.0
	I1227 09:57:41.399024  769388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1227 09:57:41.403779  769388 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1227 09:57:41.403832  769388 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-574701 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-574701 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:57:41.403946  769388 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1227 09:57:41.429145  769388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1227 09:57:41.439644  769388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1227 09:57:41.448769  769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 09:57:41.448834  769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 09:57:41.460465  769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 09:57:41.460481  769388 kubeadm.go:158] found existing configuration files:
	
	I1227 09:57:41.460550  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 09:57:41.471042  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 09:57:41.471103  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 09:57:41.480178  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 09:57:41.490398  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 09:57:41.490464  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 09:57:41.499105  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.510257  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 09:57:41.510321  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 09:57:41.520923  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 09:57:41.534256  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 09:57:41.534333  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 09:57:41.542461  769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 09:57:41.646824  769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 09:57:41.648335  769388 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 09:57:41.753889  769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 09:57:41.754015  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 09:57:41.754079  769388 kubeadm.go:319] OS: Linux
	I1227 09:57:41.754162  769388 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 09:57:41.754242  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 09:57:41.754318  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 09:57:41.754400  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 09:57:41.754479  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 09:57:41.754553  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 09:57:41.754656  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 09:57:41.754726  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 09:57:41.754805  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 09:57:41.836243  769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 09:57:41.836443  769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 09:57:41.836586  769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 09:57:41.855494  769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 09:57:41.785794  769090 out.go:252]   - Generating certificates and keys ...
	I1227 09:57:41.785959  769090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:57:41.786069  769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:57:42.111543  769090 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:57:42.252770  769090 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:57:42.503417  769090 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:57:42.668993  769090 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:57:43.021398  769090 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:57:43.021831  769090 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:57:41.860963  769388 out.go:252]   - Generating certificates and keys ...
	I1227 09:57:41.861090  769388 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 09:57:41.861187  769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 09:57:42.027134  769388 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1227 09:57:42.183308  769388 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1227 09:57:42.275495  769388 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1227 09:57:42.538151  769388 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1227 09:57:42.689457  769388 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1227 09:57:42.690078  769388 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:57:42.729913  769388 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:57:42.730516  769388 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1227 09:57:42.981667  769388 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:57:43.099131  769388 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:57:43.810479  769388 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:57:43.811011  769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:57:44.109743  769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:57:44.315485  769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:57:44.540089  769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:57:44.694926  769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:57:45.077270  769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:57:45.080386  769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:57:45.089864  769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:57:43.563328  769090 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1227 09:57:43.564051  769090 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1227 09:57:43.973250  769090 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1227 09:57:44.693761  769090 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1227 09:57:44.975792  769090 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1227 09:57:44.976216  769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 09:57:45.527516  769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 09:57:45.744663  769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 09:57:45.991918  769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 09:57:46.189187  769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 09:57:46.428467  769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 09:57:46.429216  769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 09:57:46.432110  769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 09:57:46.435922  769090 out.go:252]   - Booting up control plane ...
	I1227 09:57:46.436040  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:57:46.436157  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:57:46.436262  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:57:46.453052  769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:57:46.453445  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:57:46.460773  769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:57:46.461104  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:57:46.461150  769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:57:46.595002  769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:57:46.595169  769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 09:57:45.093574  769388 out.go:252]   - Booting up control plane ...
	I1227 09:57:45.095563  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 09:57:45.097773  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 09:57:45.099785  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 09:57:45.145757  769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 09:57:45.145889  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 09:57:45.157698  769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 09:57:45.158555  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 09:57:45.158619  769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 09:57:45.405440  769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 09:57:45.405562  769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:01:45.399682  769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001476405s
	I1227 10:01:45.399725  769388 kubeadm.go:319] 
	I1227 10:01:45.399789  769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:01:45.399827  769388 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:01:45.399942  769388 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:01:45.399950  769388 kubeadm.go:319] 
	I1227 10:01:45.400064  769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:01:45.400098  769388 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:01:45.400133  769388 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:01:45.400138  769388 kubeadm.go:319] 
	I1227 10:01:45.404789  769388 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:01:45.405218  769388 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:01:45.405332  769388 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:01:45.405567  769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:01:45.405577  769388 kubeadm.go:319] 
	I1227 10:01:45.405646  769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:01:45.405800  769388 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-574701 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001476405s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
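Note: the kubelet never answered its health endpoint within the 4m0s window, so kubeadm aborted at the wait-control-plane phase. A minimal way to inspect the kubelet from the Jenkins host, assuming the kicbase node container carries the profile name (force-systemd-flag-574701, as the docker driver normally does), would be:

	# check the kubelet unit state inside the node container (container name assumed from the profile)
	docker exec force-systemd-flag-574701 systemctl status kubelet
	# tail the kubelet journal for the underlying startup error
	docker exec force-systemd-flag-574701 journalctl -xeu kubelet --no-pager -n 100

These are the same 'systemctl status kubelet' / 'journalctl -xeu kubelet' probes the kubeadm output above suggests, just run from outside the node.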
	I1227 10:01:45.405885  769388 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 10:01:45.831088  769388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:45.845534  769388 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:01:45.845599  769388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:01:45.853400  769388 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:01:45.853418  769388 kubeadm.go:158] found existing configuration files:
	
	I1227 10:01:45.853490  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:01:45.862159  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:01:45.862225  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:01:45.869960  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:01:45.877918  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:01:45.877988  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:01:45.885657  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:01:45.893024  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:01:45.893088  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:01:45.900643  769388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:01:45.908132  769388 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:01:45.908198  769388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:01:45.915813  769388 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:01:45.955846  769388 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:01:45.955910  769388 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:01:46.044287  769388 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:01:46.044366  769388 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:01:46.044408  769388 kubeadm.go:319] OS: Linux
	I1227 10:01:46.044460  769388 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:01:46.044514  769388 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:01:46.044563  769388 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:01:46.044621  769388 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:01:46.044672  769388 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:01:46.044726  769388 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:01:46.044780  769388 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:01:46.044831  769388 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:01:46.044883  769388 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:01:46.122322  769388 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:01:46.122522  769388 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:01:46.122662  769388 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:01:46.135379  769388 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:01:46.139129  769388 out.go:252]   - Generating certificates and keys ...
	I1227 10:01:46.139327  769388 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:01:46.139450  769388 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:01:46.139598  769388 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:01:46.139674  769388 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:01:46.139756  769388 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:01:46.139815  769388 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:01:46.139883  769388 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:01:46.139949  769388 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:01:46.140059  769388 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:01:46.140138  769388 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:01:46.140469  769388 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:01:46.140529  769388 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:01:46.278774  769388 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:01:46.467106  769388 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:01:46.674089  769388 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:01:46.962090  769388 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:01:47.089511  769388 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:01:47.090121  769388 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:01:47.094363  769388 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:01:46.594891  769090 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000241154s
	I1227 10:01:46.594938  769090 kubeadm.go:319] 
	I1227 10:01:46.595000  769090 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:01:46.595036  769090 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:01:46.595163  769090 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:01:46.595173  769090 kubeadm.go:319] 
	I1227 10:01:46.595286  769090 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:01:46.595323  769090 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:01:46.595357  769090 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:01:46.595361  769090 kubeadm.go:319] 
	I1227 10:01:46.600352  769090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:01:46.600807  769090 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:01:46.600916  769090 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:01:46.601157  769090 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:01:46.601163  769090 kubeadm.go:319] 
	I1227 10:01:46.601232  769090 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1227 10:01:46.601345  769090 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159617 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000241154s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
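Note: both profiles (force-systemd-flag-574701 and force-systemd-env-159617) fail identically at the same kubelet-check, which points at the shared node environment rather than either test. The cgroups v1 deprecation warning above is one candidate; as a sketch, the host's cgroup mode can be checked with:

	# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1
	stat -fc %T /sys/fs/cgroup

Per that warning, running kubelet v1.35 or newer on a cgroup v1 host requires explicitly setting the kubelet configuration option 'FailCgroupV1' to 'false' and skipping the validation.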
	I1227 10:01:46.601418  769090 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1227 10:01:47.049789  769090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 10:01:47.065686  769090 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1227 10:01:47.065751  769090 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1227 10:01:47.078067  769090 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1227 10:01:47.078144  769090 kubeadm.go:158] found existing configuration files:
	
	I1227 10:01:47.078247  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1227 10:01:47.088920  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1227 10:01:47.089035  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1227 10:01:47.101290  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1227 10:01:47.111719  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1227 10:01:47.111783  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1227 10:01:47.119486  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1227 10:01:47.128720  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1227 10:01:47.128889  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1227 10:01:47.137979  769090 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1227 10:01:47.146623  769090 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1227 10:01:47.146781  769090 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1227 10:01:47.155774  769090 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1227 10:01:47.197997  769090 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1227 10:01:47.198575  769090 kubeadm.go:319] [preflight] Running pre-flight checks
	I1227 10:01:47.334679  769090 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1227 10:01:47.334774  769090 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1227 10:01:47.334814  769090 kubeadm.go:319] OS: Linux
	I1227 10:01:47.334877  769090 kubeadm.go:319] CGROUPS_CPU: enabled
	I1227 10:01:47.334937  769090 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1227 10:01:47.335000  769090 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1227 10:01:47.335065  769090 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1227 10:01:47.335164  769090 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1227 10:01:47.335236  769090 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1227 10:01:47.335294  769090 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1227 10:01:47.335359  769090 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1227 10:01:47.335418  769090 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1227 10:01:47.413630  769090 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1227 10:01:47.413746  769090 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1227 10:01:47.413842  769090 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1227 10:01:47.427809  769090 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1227 10:01:47.431698  769090 out.go:252]   - Generating certificates and keys ...
	I1227 10:01:47.431881  769090 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1227 10:01:47.431951  769090 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1227 10:01:47.432047  769090 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1227 10:01:47.432114  769090 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1227 10:01:47.432211  769090 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1227 10:01:47.432286  769090 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1227 10:01:47.432360  769090 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1227 10:01:47.432432  769090 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1227 10:01:47.432512  769090 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1227 10:01:47.432810  769090 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1227 10:01:47.433140  769090 kubeadm.go:319] [certs] Using the existing "sa" key
	I1227 10:01:47.433248  769090 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1227 10:01:47.584725  769090 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1227 10:01:47.986204  769090 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1227 10:01:47.097843  769388 out.go:252]   - Booting up control plane ...
	I1227 10:01:47.097949  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:01:47.099592  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:01:47.099673  769388 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:01:47.133940  769388 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:01:47.134045  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:01:47.147908  769388 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:01:47.148976  769388 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:01:47.149327  769388 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:01:47.321604  769388 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:01:47.321718  769388 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:01:48.231719  769090 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1227 10:01:48.868258  769090 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1227 10:01:49.097361  769090 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1227 10:01:49.097857  769090 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1227 10:01:49.100455  769090 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1227 10:01:49.104347  769090 out.go:252]   - Booting up control plane ...
	I1227 10:01:49.104456  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1227 10:01:49.104539  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1227 10:01:49.105527  769090 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1227 10:01:49.125548  769090 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1227 10:01:49.125672  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1227 10:01:49.134446  769090 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1227 10:01:49.134626  769090 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1227 10:01:49.134694  769090 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1227 10:01:49.262884  769090 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1227 10:01:49.263010  769090 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1227 10:05:47.321648  769388 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000305874s
	I1227 10:05:47.321690  769388 kubeadm.go:319] 
	I1227 10:05:47.321762  769388 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:05:47.321802  769388 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:05:47.321944  769388 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:05:47.321958  769388 kubeadm.go:319] 
	I1227 10:05:47.322066  769388 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:05:47.322103  769388 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:05:47.322153  769388 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:05:47.322165  769388 kubeadm.go:319] 
	I1227 10:05:47.325886  769388 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:05:47.326310  769388 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:05:47.326424  769388 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:05:47.326663  769388 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1227 10:05:47.326673  769388 kubeadm.go:319] 
	I1227 10:05:47.326742  769388 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:05:47.326828  769388 kubeadm.go:403] duration metric: took 8m5.922999378s to StartCluster
	I1227 10:05:47.326868  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:05:47.326939  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:05:47.362142  769388 cri.go:96] found id: ""
	I1227 10:05:47.362184  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.362193  769388 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:05:47.362200  769388 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:05:47.362260  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:05:47.386992  769388 cri.go:96] found id: ""
	I1227 10:05:47.387017  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.387026  769388 logs.go:284] No container was found matching "etcd"
	I1227 10:05:47.387033  769388 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:05:47.387095  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:05:47.412506  769388 cri.go:96] found id: ""
	I1227 10:05:47.412532  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.412541  769388 logs.go:284] No container was found matching "coredns"
	I1227 10:05:47.412549  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:05:47.412607  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:05:47.440415  769388 cri.go:96] found id: ""
	I1227 10:05:47.440440  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.440449  769388 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:05:47.440456  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:05:47.440515  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:05:47.465494  769388 cri.go:96] found id: ""
	I1227 10:05:47.465522  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.465530  769388 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:05:47.465538  769388 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:05:47.465601  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:05:47.494595  769388 cri.go:96] found id: ""
	I1227 10:05:47.494628  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.494638  769388 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:05:47.494645  769388 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:05:47.494716  769388 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:05:47.523703  769388 cri.go:96] found id: ""
	I1227 10:05:47.523728  769388 logs.go:282] 0 containers: []
	W1227 10:05:47.523736  769388 logs.go:284] No container was found matching "kindnet"
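The probe sequence above is minikube asking the CRI runtime for each expected control-plane container by name and finding none. A minimal sketch that reproduces the same check by hand from inside the node (node access via `minikube ssh` is an assumption; the container names are exactly the ones queried in the log):

# Ask the CRI runtime for each control-plane container, as the log does.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
  ids=$(sudo crictl --timeout=10s ps -a --quiet --name="$name")
  if [ -n "$ids" ]; then echo "$name: $ids"; else echo "no container matching \"$name\""; fi
done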
	I1227 10:05:47.523746  769388 logs.go:123] Gathering logs for Docker ...
	I1227 10:05:47.523757  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 10:05:47.546298  769388 logs.go:123] Gathering logs for container status ...
	I1227 10:05:47.546329  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1227 10:05:47.584884  769388 logs.go:123] Gathering logs for kubelet ...
	I1227 10:05:47.584959  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:05:47.653574  769388 logs.go:123] Gathering logs for dmesg ...
	I1227 10:05:47.653612  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:05:47.671978  769388 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:05:47.672006  769388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:05:47.737784  769388 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:47.729462    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.730146    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.731816    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.732344    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.733957    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:05:47.729462    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.730146    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.731816    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.732344    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:47.733957    5502 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1227 10:05:47.737860  769388 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000305874s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
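One of the preflight warnings above is directly actionable: the kubelet unit was not enabled. A sketch of the fix the warning itself names, run inside the node:

# Enable the kubelet unit, as the [WARNING Service-kubelet] line suggests.
sudo systemctl enable kubelet.service
systemctl is-enabled kubelet.service   # should now print "enabled"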
	W1227 10:05:47.737902  769388 out.go:285] * 
	W1227 10:05:47.737955  769388 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 10:05:47.737974  769388 out.go:285] * 
	W1227 10:05:47.738225  769388 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:05:47.743845  769388 out.go:203] 
	W1227 10:05:47.746703  769388 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 10:05:47.746744  769388 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:05:47.746767  769388 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:05:47.749808  769388 out.go:203] 
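The kubeadm error above quotes the exact health probe it ran and the two troubleshooting commands it recommends. A sketch of running them by hand inside the node (reaching the node via `minikube ssh` is an assumption):

# Mirror kubeadm's kubelet health probe, then inspect the kubelet unit.
curl -sSL http://127.0.0.1:10248/healthz; echo
sudo systemctl status kubelet --no-pager
sudo journalctl -xeu kubelet -n 50 --no-pager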
	I1227 10:05:49.268185  769090 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001365675s
	I1227 10:05:49.268225  769090 kubeadm.go:319] 
	I1227 10:05:49.268518  769090 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1227 10:05:49.268647  769090 kubeadm.go:319] 	- The kubelet is not running
	I1227 10:05:49.268979  769090 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1227 10:05:49.268993  769090 kubeadm.go:319] 
	I1227 10:05:49.269184  769090 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1227 10:05:49.269240  769090 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1227 10:05:49.269418  769090 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1227 10:05:49.269444  769090 kubeadm.go:319] 
	I1227 10:05:49.270052  769090 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1227 10:05:49.271047  769090 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1227 10:05:49.271268  769090 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1227 10:05:49.272065  769090 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1227 10:05:49.272085  769090 kubeadm.go:319] 
	I1227 10:05:49.272169  769090 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1227 10:05:49.272247  769090 kubeadm.go:403] duration metric: took 8m7.909287482s to StartCluster
	I1227 10:05:49.272292  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1227 10:05:49.272367  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1227 10:05:49.329549  769090 cri.go:96] found id: ""
	I1227 10:05:49.329594  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.329603  769090 logs.go:284] No container was found matching "kube-apiserver"
	I1227 10:05:49.329610  769090 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1227 10:05:49.329676  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1227 10:05:49.386751  769090 cri.go:96] found id: ""
	I1227 10:05:49.386833  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.386845  769090 logs.go:284] No container was found matching "etcd"
	I1227 10:05:49.386854  769090 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1227 10:05:49.386926  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1227 10:05:49.449495  769090 cri.go:96] found id: ""
	I1227 10:05:49.449526  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.449534  769090 logs.go:284] No container was found matching "coredns"
	I1227 10:05:49.449541  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1227 10:05:49.449594  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1227 10:05:49.491413  769090 cri.go:96] found id: ""
	I1227 10:05:49.491448  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.491457  769090 logs.go:284] No container was found matching "kube-scheduler"
	I1227 10:05:49.491463  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1227 10:05:49.491519  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1227 10:05:49.530010  769090 cri.go:96] found id: ""
	I1227 10:05:49.530033  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.530041  769090 logs.go:284] No container was found matching "kube-proxy"
	I1227 10:05:49.530048  769090 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1227 10:05:49.530103  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1227 10:05:49.560962  769090 cri.go:96] found id: ""
	I1227 10:05:49.560990  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.560998  769090 logs.go:284] No container was found matching "kube-controller-manager"
	I1227 10:05:49.561005  769090 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1227 10:05:49.561059  769090 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1227 10:05:49.589143  769090 cri.go:96] found id: ""
	I1227 10:05:49.589164  769090 logs.go:282] 0 containers: []
	W1227 10:05:49.589172  769090 logs.go:284] No container was found matching "kindnet"
	I1227 10:05:49.589183  769090 logs.go:123] Gathering logs for kubelet ...
	I1227 10:05:49.589194  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1227 10:05:49.663786  769090 logs.go:123] Gathering logs for dmesg ...
	I1227 10:05:49.663869  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1227 10:05:49.679647  769090 logs.go:123] Gathering logs for describe nodes ...
	I1227 10:05:49.679678  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1227 10:05:49.789861  769090 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:49.773841    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.774212    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778518    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778831    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.780192    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1227 10:05:49.773841    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.774212    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778518    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.778831    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:49.780192    5467 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1227 10:05:49.789896  769090 logs.go:123] Gathering logs for Docker ...
	I1227 10:05:49.789911  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1227 10:05:49.831802  769090 logs.go:123] Gathering logs for container status ...
	I1227 10:05:49.831872  769090 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1227 10:05:49.871613  769090 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001365675s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1227 10:05:49.871654  769090 out.go:285] * 
	W1227 10:05:49.871703  769090 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 10:05:49.871713  769090 out.go:285] * 
	W1227 10:05:49.871969  769090 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1227 10:05:49.878027  769090 out.go:203] 
	W1227 10:05:49.880016  769090 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1227 10:05:49.880062  769090 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1227 10:05:49.880081  769090 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1227 10:05:49.883301  769090 out.go:203] 
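The exit message above suggests retrying with an explicit kubelet cgroup driver. A sketch of that retry (the profile name is reused from this run; the driver and runtime flags are assumptions matching this job's Docker-on-Linux configuration):

# Retry the start with the suggested kubelet cgroup-driver override.
out/minikube-linux-arm64 delete -p force-systemd-env-159617
out/minikube-linux-arm64 start -p force-systemd-env-159617 \
  --driver=docker --container-runtime=docker \
  --extra-config=kubelet.cgroup-driver=systemd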
	
	
	==> Docker <==
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.351519595Z" level=info msg="Restoring containers: start."
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.367803068Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.387569440Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.590182566Z" level=info msg="Loading containers: done."
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.607645732Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.607701845Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.607738004Z" level=info msg="Initializing buildkit"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.635143730Z" level=info msg="Completed buildkit initialization"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.640674088Z" level=info msg="Daemon has completed initialization"
	Dec 27 09:57:38 force-systemd-env-159617 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.643464690Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.643556774Z" level=info msg="API listen on /run/docker.sock"
	Dec 27 09:57:38 force-systemd-env-159617 dockerd[1140]: time="2025-12-27T09:57:38.643659664Z" level=info msg="API listen on [::]:2376"
	Dec 27 09:57:39 force-systemd-env-159617 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Start docker client with request timeout 0s"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Loaded network plugin cni"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Setting cgroupDriver systemd"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 27 09:57:39 force-systemd-env-159617 cri-dockerd[1423]: time="2025-12-27T09:57:39Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 27 09:57:39 force-systemd-env-159617 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1227 10:05:51.594329    5611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:51.595448    5611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:51.597136    5611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:51.597656    5611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1227 10:05:51.599271    5611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.131052] systemd-journald[229]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 08:52] overlayfs: idmapped layers are currently not supported
	[Dec27 08:53] overlayfs: idmapped layers are currently not supported
	[Dec27 08:55] overlayfs: idmapped layers are currently not supported
	[Dec27 08:56] overlayfs: idmapped layers are currently not supported
	[Dec27 09:02] overlayfs: idmapped layers are currently not supported
	[Dec27 09:03] overlayfs: idmapped layers are currently not supported
	[Dec27 09:04] overlayfs: idmapped layers are currently not supported
	[Dec27 09:05] overlayfs: idmapped layers are currently not supported
	[Dec27 09:06] overlayfs: idmapped layers are currently not supported
	[Dec27 09:08] overlayfs: idmapped layers are currently not supported
	[ +24.018537] overlayfs: idmapped layers are currently not supported
	[Dec27 09:09] overlayfs: idmapped layers are currently not supported
	[ +25.285275] overlayfs: idmapped layers are currently not supported
	[Dec27 09:10] overlayfs: idmapped layers are currently not supported
	[ +21.268238] systemd-journald[230]: Failed to send stream file descriptor to service manager: Connection refused
	[Dec27 09:11] overlayfs: idmapped layers are currently not supported
	[  +4.417156] overlayfs: idmapped layers are currently not supported
	[ +35.863671] overlayfs: idmapped layers are currently not supported
	[Dec27 09:12] overlayfs: idmapped layers are currently not supported
	[Dec27 09:13] overlayfs: idmapped layers are currently not supported
	[Dec27 09:14] overlayfs: idmapped layers are currently not supported
	[ +22.811829] overlayfs: idmapped layers are currently not supported
	[Dec27 09:16] overlayfs: idmapped layers are currently not supported
	[Dec27 09:18] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 10:05:51 up  4:48,  0 user,  load average: 1.09, 0.98, 1.70
	Linux force-systemd-env-159617 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
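The kernel line shows a 5.15 AWS kernel, and both the Docker and kubelet logs complain about cgroup v1. A quick way to confirm which cgroup hierarchy the host is actually running (a standard check, not taken from this log):

# cgroup2fs => unified cgroup v2; tmpfs => legacy cgroup v1 hierarchy.
stat -fc %T /sys/fs/cgroup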
	
	
	==> kubelet <==
	Dec 27 10:05:48 force-systemd-env-159617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:49 force-systemd-env-159617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 27 10:05:49 force-systemd-env-159617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:49 force-systemd-env-159617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:49 force-systemd-env-159617 kubelet[5403]: E1227 10:05:49.426897    5403 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:49 force-systemd-env-159617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:49 force-systemd-env-159617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:50 force-systemd-env-159617 kubelet[5484]: E1227 10:05:50.118450    5484 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:50 force-systemd-env-159617 kubelet[5528]: E1227 10:05:50.895150    5528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:50 force-systemd-env-159617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 27 10:05:51 force-systemd-env-159617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 27 10:05:51 force-systemd-env-159617 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:51 force-systemd-env-159617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 27 10:05:51 force-systemd-env-159617 kubelet[5616]: E1227 10:05:51.669452    5616 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 27 10:05:51 force-systemd-env-159617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 27 10:05:51 force-systemd-env-159617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
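Editor's note: the crash loop in the kubelet log above has a single root cause, repeated on every systemd restart: the node is on cgroup v1, and this kubelet refuses to start on cgroup v1 hosts. A quick way to confirm which cgroup version a host (or the minikube node container) is using — a minimal sketch, assuming a Linux shell with GNU coreutils:

    # "cgroup2fs" means the host runs cgroup v2; "tmpfs" means legacy cgroup v1.
    stat -fc %T /sys/fs/cgroup/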
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-159617 -n force-systemd-env-159617
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-159617 -n force-systemd-env-159617: exit status 6 (488.871265ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1227 10:05:52.331538  782621 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-159617" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-159617" apiserver is not running, skipping kubectl commands (state="Stopped")
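Editor's note: the non-zero exit above comes from `minikube status` reporting stopped components rather than a command failure (the harness itself notes "may be ok"); the stderr shows the kubeconfig no longer lists the profile's endpoint. The stdout hint names the repair; a minimal sketch of applying it, assuming the profile still existed at that point:

    out/minikube-linux-arm64 update-context -p force-systemd-env-159617
    kubectl config current-context   # should now name the repaired context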
helpers_test.go:176: Cleaning up "force-systemd-env-159617" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-159617
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-159617: (1.891491722s)
--- FAIL: TestForceSystemdEnv (511.21s)

Test pass (324/352)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.08
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.17
9 TestDownloadOnly/v1.28.0/DeleteAll 0.37
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.35.0/json-events 3.37
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
22 TestOffline 82.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 137.86
29 TestAddons/serial/Volcano 41.59
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 10.92
35 TestAddons/parallel/Registry 16.42
36 TestAddons/parallel/RegistryCreds 0.69
37 TestAddons/parallel/Ingress 16.67
38 TestAddons/parallel/InspektorGadget 10.84
39 TestAddons/parallel/MetricsServer 5.79
41 TestAddons/parallel/CSI 49.31
42 TestAddons/parallel/Headlamp 16.92
43 TestAddons/parallel/CloudSpanner 5.55
44 TestAddons/parallel/LocalPath 52.87
45 TestAddons/parallel/NvidiaDevicePlugin 5.48
46 TestAddons/parallel/Yakd 11.69
48 TestAddons/StoppedEnableDisable 11.32
49 TestCertOptions 32.76
50 TestCertExpiration 247.3
51 TestDockerFlags 36.2
58 TestErrorSpam/setup 29.14
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.12
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.65
63 TestErrorSpam/stop 11.31
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 66.97
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 36.9
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.79
75 TestFunctional/serial/CacheCmd/cache/add_local 1.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 38.51
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.2
86 TestFunctional/serial/LogsFileCmd 1.22
87 TestFunctional/serial/InvalidService 4.92
89 TestFunctional/parallel/ConfigCmd 0.64
90 TestFunctional/parallel/DashboardCmd 9.34
91 TestFunctional/parallel/DryRun 0.54
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.2
97 TestFunctional/parallel/ServiceCmdConnect 7.69
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 23.16
101 TestFunctional/parallel/SSHCmd 0.86
102 TestFunctional/parallel/CpCmd 2.53
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.11
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
113 TestFunctional/parallel/License 0.44
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 1.21
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.54
121 TestFunctional/parallel/ImageCommands/Setup 0.68
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
123 TestFunctional/parallel/DockerEnv/bash 1.44
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
133 TestFunctional/parallel/ProfileCmd/profile_list 0.54
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.35
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
147 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
148 TestFunctional/parallel/ServiceCmd/List 0.63
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
150 TestFunctional/parallel/MountCmd/any-port 9.43
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
152 TestFunctional/parallel/ServiceCmd/Format 0.46
153 TestFunctional/parallel/ServiceCmd/URL 0.51
154 TestFunctional/parallel/MountCmd/specific-port 2.31
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 193.11
164 TestMultiControlPlane/serial/DeployApp 7.67
165 TestMultiControlPlane/serial/PingHostFromPods 1.99
166 TestMultiControlPlane/serial/AddWorkerNode 35.06
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
169 TestMultiControlPlane/serial/CopyFile 20.38
170 TestMultiControlPlane/serial/StopSecondaryNode 12.07
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
172 TestMultiControlPlane/serial/RestartSecondaryNode 44.2
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 152.02
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.33
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
177 TestMultiControlPlane/serial/StopCluster 33.54
178 TestMultiControlPlane/serial/RestartCluster 69.03
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.96
180 TestMultiControlPlane/serial/AddSecondaryNode 85.83
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
184 TestImageBuild/serial/Setup 28.34
185 TestImageBuild/serial/NormalBuild 1.55
186 TestImageBuild/serial/BuildWithBuildArg 0.95
187 TestImageBuild/serial/BuildWithDockerIgnore 0.76
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.01
193 TestJSONOutput/start/Command 67.64
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.66
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.61
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 11.12
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 27.39
219 TestKicCustomNetwork/use_default_bridge_network 29.79
220 TestKicExistingNetwork 30.73
221 TestKicCustomSubnet 29
222 TestKicStaticIP 30.73
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 62.68
227 TestMountStart/serial/StartWithMountFirst 10.29
228 TestMountStart/serial/VerifyMountFirst 0.26
229 TestMountStart/serial/StartWithMountSecond 10.2
230 TestMountStart/serial/VerifyMountSecond 0.26
231 TestMountStart/serial/DeleteFirst 1.57
232 TestMountStart/serial/VerifyMountPostDelete 0.26
233 TestMountStart/serial/Stop 1.29
234 TestMountStart/serial/RestartStopped 8.43
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 85.33
239 TestMultiNode/serial/DeployApp2Nodes 5.58
240 TestMultiNode/serial/PingHostFrom2Pods 1
241 TestMultiNode/serial/AddNode 35.01
242 TestMultiNode/serial/MultiNodeLabels 0.09
243 TestMultiNode/serial/ProfileList 0.69
244 TestMultiNode/serial/CopyFile 10.22
245 TestMultiNode/serial/StopNode 2.39
246 TestMultiNode/serial/StartAfterStop 9.33
247 TestMultiNode/serial/RestartKeepsNodes 79.68
248 TestMultiNode/serial/DeleteNode 5.73
249 TestMultiNode/serial/StopMultiNode 21.89
250 TestMultiNode/serial/RestartMultiNode 54.1
251 TestMultiNode/serial/ValidateNameConflict 32.73
258 TestScheduledStopUnix 101.87
259 TestSkaffold 137.79
261 TestInsufficientStorage 12.87
262 TestRunningBinaryUpgrade 316.24
264 TestKubernetesUpgrade 342.23
265 TestMissingContainerUpgrade 82.08
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
268 TestNoKubernetes/serial/StartWithK8s 37.49
269 TestNoKubernetes/serial/StartWithStopK8s 18.4
270 TestNoKubernetes/serial/Start 8.88
271 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
273 TestNoKubernetes/serial/ProfileList 1.1
274 TestNoKubernetes/serial/Stop 1.3
275 TestNoKubernetes/serial/StartNoArgs 8.2
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
288 TestStoppedBinaryUpgrade/Setup 0.78
289 TestStoppedBinaryUpgrade/Upgrade 336.82
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
298 TestPreload/Start-NoPreload-PullImage 89.12
300 TestPause/serial/Start 69.92
301 TestPreload/Restart-With-Preload-Check-User-Image 52.76
302 TestPause/serial/SecondStartNoReconfiguration 37.76
304 TestNetworkPlugins/group/auto/Start 76.38
305 TestPause/serial/Pause 0.93
306 TestPause/serial/VerifyStatus 0.44
307 TestPause/serial/Unpause 0.71
308 TestPause/serial/PauseAgain 1.06
309 TestPause/serial/DeletePaused 2.53
310 TestPause/serial/VerifyDeletedResources 0.5
311 TestNetworkPlugins/group/flannel/Start 51.68
312 TestNetworkPlugins/group/flannel/ControllerPod 6.01
313 TestNetworkPlugins/group/auto/KubeletFlags 0.33
314 TestNetworkPlugins/group/auto/NetCatPod 10.28
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
316 TestNetworkPlugins/group/flannel/NetCatPod 10.32
317 TestNetworkPlugins/group/auto/DNS 0.22
318 TestNetworkPlugins/group/auto/Localhost 0.18
319 TestNetworkPlugins/group/auto/HairPin 0.17
320 TestNetworkPlugins/group/flannel/DNS 0.27
321 TestNetworkPlugins/group/flannel/Localhost 0.21
322 TestNetworkPlugins/group/flannel/HairPin 0.19
323 TestNetworkPlugins/group/calico/Start 81
324 TestNetworkPlugins/group/custom-flannel/Start 51.81
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.5
327 TestNetworkPlugins/group/custom-flannel/DNS 0.2
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
330 TestNetworkPlugins/group/calico/ControllerPod 6
331 TestNetworkPlugins/group/calico/KubeletFlags 0.4
332 TestNetworkPlugins/group/calico/NetCatPod 12.36
333 TestNetworkPlugins/group/false/Start 77.63
334 TestNetworkPlugins/group/calico/DNS 0.32
335 TestNetworkPlugins/group/calico/Localhost 0.23
336 TestNetworkPlugins/group/calico/HairPin 0.19
337 TestNetworkPlugins/group/kindnet/Start 52.78
338 TestNetworkPlugins/group/false/KubeletFlags 0.32
339 TestNetworkPlugins/group/false/NetCatPod 10.3
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/false/DNS 0.17
342 TestNetworkPlugins/group/false/Localhost 0.15
343 TestNetworkPlugins/group/false/HairPin 0.19
344 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
345 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
346 TestNetworkPlugins/group/kindnet/DNS 0.26
347 TestNetworkPlugins/group/kindnet/Localhost 0.22
348 TestNetworkPlugins/group/kindnet/HairPin 0.21
349 TestNetworkPlugins/group/kubenet/Start 72.28
350 TestNetworkPlugins/group/enable-default-cni/Start 72.03
351 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
352 TestNetworkPlugins/group/kubenet/NetCatPod 10.26
353 TestNetworkPlugins/group/kubenet/DNS 0.2
354 TestNetworkPlugins/group/kubenet/Localhost 0.17
355 TestNetworkPlugins/group/kubenet/HairPin 0.15
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
357 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.3
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
361 TestNetworkPlugins/group/bridge/Start 79.01
363 TestStartStop/group/old-k8s-version/serial/FirstStart 93.89
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
365 TestNetworkPlugins/group/bridge/NetCatPod 10.31
366 TestNetworkPlugins/group/bridge/DNS 0.21
367 TestNetworkPlugins/group/bridge/Localhost 0.19
368 TestNetworkPlugins/group/bridge/HairPin 0.17
370 TestStartStop/group/embed-certs/serial/FirstStart 42.61
371 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.36
373 TestStartStop/group/old-k8s-version/serial/Stop 11.54
374 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
375 TestStartStop/group/old-k8s-version/serial/SecondStart 59.11
376 TestStartStop/group/embed-certs/serial/DeployApp 10.41
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
378 TestStartStop/group/embed-certs/serial/Stop 11.34
379 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
380 TestStartStop/group/embed-certs/serial/SecondStart 53.4
381 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.59
384 TestStartStop/group/old-k8s-version/serial/Pause 3.44
386 TestStartStop/group/no-preload/serial/FirstStart 79.33
387 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
388 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.13
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
390 TestStartStop/group/embed-certs/serial/Pause 3.92
392 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.52
393 TestStartStop/group/no-preload/serial/DeployApp 10.34
394 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
395 TestStartStop/group/no-preload/serial/Stop 11.35
396 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
397 TestStartStop/group/no-preload/serial/SecondStart 55.11
398 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
399 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.61
400 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.67
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
402 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.01
403 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
404 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
405 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
406 TestStartStop/group/no-preload/serial/Pause 3.11
408 TestStartStop/group/newest-cni/serial/FirstStart 40.04
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.06
413 TestPreload/PreloadSrc/gcs 4.06
414 TestPreload/PreloadSrc/github 4.67
415 TestPreload/PreloadSrc/gcs-cached 0.59
416 TestStartStop/group/newest-cni/serial/DeployApp 0
417 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.63
418 TestStartStop/group/newest-cni/serial/Stop 11.22
419 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
420 TestStartStop/group/newest-cni/serial/SecondStart 17.35
421 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
422 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
423 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
424 TestStartStop/group/newest-cni/serial/Pause 2.94
TestDownloadOnly/v1.28.0/json-events (5.08s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-888795 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-888795 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.08155716s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.08s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1227 09:18:42.961881  550197 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1227 09:18:42.961960  550197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
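Editor's note: "preload-exists" passes by checking the local cache that the preceding download-only run populated. A minimal way to inspect that cache by hand, assuming the default MINIKUBE_HOME layout:

    ls -lh ~/.minikube/cache/preloaded-tarball/
    # expect e.g. preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4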

TestDownloadOnly/v1.28.0/LogsDuration (0.17s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-888795
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-888795: exit status 85 (172.676569ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-888795 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-888795 │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:18:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:18:37.934609  550203 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:18:37.934827  550203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:18:37.934865  550203 out.go:374] Setting ErrFile to fd 2...
	I1227 09:18:37.934886  550203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:18:37.935305  550203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	W1227 09:18:37.935534  550203 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22343-548332/.minikube/config/config.json: open /home/jenkins/minikube-integration/22343-548332/.minikube/config/config.json: no such file or directory
	I1227 09:18:37.936021  550203 out.go:368] Setting JSON to true
	I1227 09:18:37.936949  550203 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14469,"bootTime":1766812649,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:18:37.937366  550203 start.go:143] virtualization:  
	I1227 09:18:37.943330  550203 out.go:99] [download-only-888795] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1227 09:18:37.943516  550203 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball: no such file or directory
	I1227 09:18:37.943632  550203 notify.go:221] Checking for updates...
	I1227 09:18:37.947562  550203 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:18:37.950826  550203 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:18:37.954057  550203 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:18:37.957341  550203 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:18:37.960528  550203 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:18:37.966648  550203 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:18:37.966942  550203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:18:38.005187  550203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:18:38.005429  550203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:18:38.072170  550203 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:18:38.061038736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:18:38.072275  550203 docker.go:319] overlay module found
	I1227 09:18:38.075467  550203 out.go:99] Using the docker driver based on user configuration
	I1227 09:18:38.075511  550203 start.go:309] selected driver: docker
	I1227 09:18:38.075519  550203 start.go:928] validating driver "docker" against <nil>
	I1227 09:18:38.075661  550203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:18:38.130460  550203 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-27 09:18:38.121960101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:18:38.130609  550203 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:18:38.130900  550203 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:18:38.131040  550203 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:18:38.134299  550203 out.go:171] Using Docker driver with root privileges
	I1227 09:18:38.137218  550203 cni.go:84] Creating CNI manager for ""
	I1227 09:18:38.137298  550203 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1227 09:18:38.137313  550203 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1227 09:18:38.137395  550203 start.go:353] cluster config:
	{Name:download-only-888795 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-888795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:18:38.140455  550203 out.go:99] Starting "download-only-888795" primary control-plane node in "download-only-888795" cluster
	I1227 09:18:38.140472  550203 cache.go:134] Beginning downloading kic base image for docker with docker
	I1227 09:18:38.143377  550203 out.go:99] Pulling base image v0.0.48-1766570851-22316 ...
	I1227 09:18:38.143419  550203 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 09:18:38.143511  550203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local docker daemon
	I1227 09:18:38.158629  550203 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:18:38.158815  550203 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a in local cache directory
	I1227 09:18:38.158911  550203 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a to local cache
	I1227 09:18:38.192694  550203 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1227 09:18:38.192730  550203 cache.go:65] Caching tarball of preloaded images
	I1227 09:18:38.192903  550203 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 09:18:38.196211  550203 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1227 09:18:38.196236  550203 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1227 09:18:38.196244  550203 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1227 09:18:38.276743  550203 preload.go:313] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1227 09:18:38.276870  550203 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1227 09:18:40.897653  550203 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1227 09:18:40.898048  550203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/download-only-888795/config.json ...
	I1227 09:18:40.898083  550203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/download-only-888795/config.json: {Name:mk68cd6153f2214e66fef251f1fdced9e24ced25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1227 09:18:40.898267  550203 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1227 09:18:40.898456  550203 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22343-548332/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-888795 host does not exist
	  To start a cluster, run: "minikube start -p download-only-888795"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.17s)
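Editor's note: the Last Start log above fetches the preload with an md5 checksum obtained from the GCS API and appended as a ?checksum= query parameter. Verifying a cached tarball against that value by hand — a minimal sketch, assuming GNU coreutils and the default cache path:

    md5sum ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
    # expect 002a73d62a3b066a08573cf3da2c8cb4, the checksum reported in the log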

TestDownloadOnly/v1.28.0/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.37s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-888795
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.35.0/json-events (3.37s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-056214 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-056214 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.367405383s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.37s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1227 09:18:47.095783  550197 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1227 09:18:47.095818  550197 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-056214
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-056214: exit status 85 (87.876734ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-888795 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-888795 │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │ 27 Dec 25 09:18 UTC │
	│ delete  │ -p download-only-888795                                                                                                                                                       │ download-only-888795 │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │ 27 Dec 25 09:18 UTC │
	│ start   │ -o=json --download-only -p download-only-056214 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-056214 │ jenkins │ v1.37.0 │ 27 Dec 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/27 09:18:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1227 09:18:43.771189  550405 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:18:43.771321  550405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:18:43.771332  550405 out.go:374] Setting ErrFile to fd 2...
	I1227 09:18:43.771339  550405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:18:43.771707  550405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:18:43.772175  550405 out.go:368] Setting JSON to true
	I1227 09:18:43.772992  550405 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14475,"bootTime":1766812649,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:18:43.773082  550405 start.go:143] virtualization:  
	I1227 09:18:43.805505  550405 out.go:99] [download-only-056214] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:18:43.805804  550405 notify.go:221] Checking for updates...
	I1227 09:18:43.843160  550405 out.go:171] MINIKUBE_LOCATION=22343
	I1227 09:18:43.867538  550405 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:18:43.900911  550405 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:18:43.932168  550405 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:18:43.966016  550405 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1227 09:18:44.044686  550405 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1227 09:18:44.044974  550405 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:18:44.069921  550405 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:18:44.070033  550405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:18:44.135437  550405 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:18:44.124358214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:18:44.135540  550405 docker.go:319] overlay module found
	I1227 09:18:44.158586  550405 out.go:99] Using the docker driver based on user configuration
	I1227 09:18:44.158656  550405 start.go:309] selected driver: docker
	I1227 09:18:44.158664  550405 start.go:928] validating driver "docker" against <nil>
	I1227 09:18:44.158771  550405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:18:44.229257  550405 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-27 09:18:44.219780445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:18:44.229401  550405 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1227 09:18:44.229651  550405 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1227 09:18:44.229803  550405 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1227 09:18:44.253827  550405 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-056214 host does not exist
	  To start a cluster, run: "minikube start -p download-only-056214"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-056214
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1227 09:18:48.251517  550197 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-042995 --alsologtostderr --binary-mirror http://127.0.0.1:44687 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-042995" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-042995
--- PASS: TestBinaryMirror (0.60s)
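Editor's note: TestBinaryMirror points minikube at a local HTTP server (http://127.0.0.1:44687) instead of dl.k8s.io for the Kubernetes binaries. A rough sketch of standing up such a mirror by hand, assuming python3 is available and that ./mirror reproduces the upstream release path layout (e.g. .../v1.35.0/bin/linux/arm64/kubectl); the profile name here is hypothetical:

    python3 -m http.server 44687 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:44687 --driver=docker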

TestOffline (82.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-663445 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-663445 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m20.245804711s)
helpers_test.go:176: Cleaning up "offline-docker-663445" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-663445
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-663445: (2.469861637s)
--- PASS: TestOffline (82.72s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-071879
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-071879: exit status 85 (69.259076ms)

-- stdout --
	* Profile "addons-071879" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-071879"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-071879
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-071879: exit status 85 (76.512106ms)

-- stdout --
	* Profile "addons-071879" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-071879"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (137.86s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-071879 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-071879 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.85889375s)
--- PASS: TestAddons/Setup (137.86s)
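Editor's note: the single start command above enables a batch of addons at once via repeated --addons flags. Addons can also be toggled individually after the cluster is up; a minimal sketch against the same profile:

    out/minikube-linux-arm64 -p addons-071879 addons enable metrics-server
    out/minikube-linux-arm64 -p addons-071879 addons list   # shows enabled/disabled state per addon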

TestAddons/serial/Volcano (41.59s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 34.330855ms
addons_test.go:870: volcano-scheduler stabilized in 34.904318ms
addons_test.go:878: volcano-admission stabilized in 35.26242ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-6c7b5cd66b-sqghf" [8e1b67cb-8b40-4a32-8789-af7392cc7e72] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.009891005s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-7f4844c49c-jfnj8" [ca7d714a-a6b8-4040-afa1-3750358e3e98] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0039781s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-8f57bcd69-rcvqc" [9fabcebf-7b52-40e9-8883-36e55afdc8ad] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004410977s
addons_test.go:905: (dbg) Run:  kubectl --context addons-071879 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-071879 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-071879 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [1da5dad5-78c8-4a46-8b67-2a176f282e87] Pending
helpers_test.go:353: "test-job-nginx-0" [1da5dad5-78c8-4a46-8b67-2a176f282e87] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [1da5dad5-78c8-4a46-8b67-2a176f282e87] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004343852s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable volcano --alsologtostderr -v=1: (11.974902844s)
--- PASS: TestAddons/serial/Volcano (41.59s)
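The Volcano flow above is create-and-wait: let the volcano-system deployments stabilize, submit a Volcano job, then wait for its pods by the volcano.sh/job-name label. A rough kubectl-only equivalent (a sketch; my-vcjob.yaml is a hypothetical manifest standing in for the test's testdata/vcjob.yaml):

  kubectl -n volcano-system wait pod -l app=volcano-scheduler --for=condition=ready --timeout=6m
  kubectl create -f my-vcjob.yaml    # hypothetical vcjob manifest
  kubectl -n my-volcano get vcjob    # the vcjob CRD is served once the addon is up
  kubectl -n my-volcano wait pod -l volcano.sh/job-name=test-job --for=condition=ready --timeout=3m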

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-071879 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-071879 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (10.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-071879 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-071879 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a72b60ea-2afe-418c-98c3-e5e7b0f8ced4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a72b60ea-2afe-418c-98c3-e5e7b0f8ced4] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003485229s
addons_test.go:696: (dbg) Run:  kubectl --context addons-071879 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-071879 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-071879 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-071879 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.92s)
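What the three exec probes above verify: the gcp-auth admission webhook mutates new pods so that GOOGLE_APPLICATION_CREDENTIALS points at a mounted fake-credentials file and GOOGLE_CLOUD_PROJECT is set. The same spot check by hand, assuming a running pod named busybox in the default namespace:

  kubectl --context addons-071879 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
  kubectl --context addons-071879 exec busybox -- cat /google-app-creds.json   # the injected fake credentials
  kubectl --context addons-071879 exec busybox -- printenv GOOGLE_CLOUD_PROJECT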

TestAddons/parallel/Registry (16.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.292842ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-psc5v" [1af0daa4-503f-48c2-933d-9c75e160c701] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009789865s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-6xrpl" [6573a7d1-529a-49a1-9440-8781332dacbc] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003241336s
addons_test.go:394: (dbg) Run:  kubectl --context addons-071879 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-071879 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-071879 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.435255511s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 ip
2025/12/27 09:22:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.42s)
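The registry addon is probed on both paths: in-cluster service DNS from a throwaway busybox pod, and the node IP on port 5000 from the host. A condensed by-hand version (a sketch; the /v2/_catalog endpoint is the standard Docker registry API and is my addition, not part of the test):

  kubectl --context addons-071879 run registry-test --rm --restart=Never -it \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -s "http://$(minikube -p addons-071879 ip):5000/v2/_catalog"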

TestAddons/parallel/RegistryCreds (0.69s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.43012ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-071879
addons_test.go:334: (dbg) Run:  kubectl --context addons-071879 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

TestAddons/parallel/Ingress (16.67s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-071879 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-071879 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-071879 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [cfe69a91-2d9a-438d-8563-6b1cb335d868] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [cfe69a91-2d9a-438d-8563-6b1cb335d868] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003601878s
I1227 09:22:49.180840  550197 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-071879 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable ingress-dns --alsologtostderr -v=1: (1.152330202s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable ingress --alsologtostderr -v=1: (7.786455885s)
--- PASS: TestAddons/parallel/Ingress (16.67s)
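Two data paths are exercised here: the ingress-nginx controller answering name-based virtual hosts inside the node, and ingress-dns resolving ingress hostnames on the node IP. Reproduced by hand with the values from this run:

  minikube -p addons-071879 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"   # served by the nginx ingress
  nslookup hello-john.test "$(minikube -p addons-071879 ip)"                               # answered by ingress-dns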

TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-4m6n5" [1c0e13e1-5cc6-426c-94a9-5ada09f02c78] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009046519s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable inspektor-gadget --alsologtostderr -v=1: (5.834778653s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.542175ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-8vx99" [5a00cfe8-6479-45f9-88b1-2fdf7d85e905] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004176311s
addons_test.go:465: (dbg) Run:  kubectl --context addons-071879 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/CSI (49.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1227 09:22:25.355717  550197 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1227 09:22:25.390696  550197 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1227 09:22:25.390736  550197 kapi.go:107] duration metric: took 37.825502ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 37.847015ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-071879 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc -o jsonpath={.status.phase} -n default
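The fifteen identical helpers_test.go:403 lines above are a single poll loop: the helper re-reads .status.phase until the claim reports Bound or the 6m budget runs out. A bash sketch of that loop (interval and iteration count are illustrative, not the helper's exact values):

  for _ in $(seq 1 90); do
    phase=$(kubectl --context addons-071879 get pvc hpvc -n default -o jsonpath='{.status.phase}')
    [ "$phase" = "Bound" ] && break   # CSI provisioning is done once the claim is Bound
    sleep 2
  done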
addons_test.go:564: (dbg) Run:  kubectl --context addons-071879 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [4395e466-9b79-48d1-b7e9-1023e4309187] Pending
helpers_test.go:353: "task-pv-pod" [4395e466-9b79-48d1-b7e9-1023e4309187] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [4395e466-9b79-48d1-b7e9-1023e4309187] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.00383361s
addons_test.go:574: (dbg) Run:  kubectl --context addons-071879 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-071879 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-071879 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-071879 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-071879 delete pod task-pv-pod: (1.203460685s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-071879 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-071879 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-071879 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [c6d4a15e-bb0e-4254-b00c-3b31ad3bafbb] Pending
helpers_test.go:353: "task-pv-pod-restore" [c6d4a15e-bb0e-4254-b00c-3b31ad3bafbb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [c6d4a15e-bb0e-4254-b00c-3b31ad3bafbb] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00368338s
addons_test.go:616: (dbg) Run:  kubectl --context addons-071879 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-071879 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-071879 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.775296028s)
--- PASS: TestAddons/parallel/CSI (49.31s)
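One step worth calling out in the flow above: before the source claim is deleted and restored, the test waits on the VolumeSnapshot's readyToUse flag, the CSI driver's signal that the snapshot is actually cut. Checking that flag by hand:

  kubectl --context addons-071879 get volumesnapshot new-snapshot-demo -n default \
    -o jsonpath='{.status.readyToUse}'   # must print "true" before a restore makes sense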

TestAddons/parallel/Headlamp (16.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-071879 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-mggkq" [8394d6aa-bd28-4392-94c4-60f81190f8da] Pending
helpers_test.go:353: "headlamp-6d8d595f-mggkq" [8394d6aa-bd28-4392-94c4-60f81190f8da] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-mggkq" [8394d6aa-bd28-4392-94c4-60f81190f8da] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003844997s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable headlamp --alsologtostderr -v=1: (6.041127941s)
--- PASS: TestAddons/parallel/Headlamp (16.92s)

TestAddons/parallel/CloudSpanner (5.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-59jnl" [26fde9d3-77eb-48ca-808f-4d8f484f5a8d] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00415247s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/parallel/LocalPath (52.87s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-071879 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-071879 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [e0f1f2de-afd3-4b8e-8450-baf433db9919] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [e0f1f2de-afd3-4b8e-8450-baf433db9919] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [e0f1f2de-afd3-4b8e-8450-baf433db9919] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003777948s
addons_test.go:969: (dbg) Run:  kubectl --context addons-071879 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 ssh "cat /opt/local-path-provisioner/pvc-19193416-a1e6-4887-95cd-55f2595a82ea_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-071879 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-071879 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.796283421s)
--- PASS: TestAddons/parallel/LocalPath (52.87s)

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-pb87c" [0737ba27-0c76-4872-9f42-8d026c8d1990] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003402005s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/parallel/Yakd (11.69s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-jdvgm" [c9a10f29-5d69-470c-80f0-3a54812047d3] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002827798s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-071879 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-071879 addons disable yakd --alsologtostderr -v=1: (5.686330037s)
--- PASS: TestAddons/parallel/Yakd (11.69s)

TestAddons/StoppedEnableDisable (11.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-071879
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-071879: (11.049402159s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-071879
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-071879
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-071879
--- PASS: TestAddons/StoppedEnableDisable (11.32s)

TestCertOptions (32.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-827619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-827619 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (29.638858652s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-827619 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-827619 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-827619 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-827619" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-827619
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-827619: (2.389575952s)
--- PASS: TestCertOptions (32.76s)
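The substance of TestCertOptions is that every --apiserver-ips / --apiserver-names value lands in the API server certificate's SANs and that the non-default port 8555 is honored. The SAN list can be read directly, using the same in-node path the test inspects:

  minikube -p cert-options-827619 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'   # expect 192.168.15.15 and www.google.com in the list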

TestCertExpiration (247.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-673578 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1227 10:05:56.548395  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:06:00.727460  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:06:06.959052  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-673578 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (35.531367351s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-673578 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-673578 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.363571009s)
helpers_test.go:176: Cleaning up "cert-expiration-673578" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-673578
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-673578: (2.402194471s)
--- PASS: TestCertExpiration (247.30s)
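TestCertExpiration issues certificates with a 3-minute lifetime, lets them lapse, then restarts with --cert-expiration=8760h to force reissue. The renewal can be confirmed from the certificate's NotAfter date (this check is my addition, not part of the test):

  minikube -p cert-expiration-673578 ssh \
    "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"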

TestDockerFlags (36.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-819560 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-819560 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.871482648s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-819560 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-819560 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-819560" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-819560
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-819560: (2.569831858s)
--- PASS: TestDockerFlags (36.20s)
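The two systemctl queries above assert that --docker-env values reach the dockerd unit's Environment and --docker-opt values reach its command line. Spot-checking one of each (a sketch, assuming --docker-opt=icc=true is rendered as --icc=true in ExecStart):

  minikube -p docker-flags-819560 ssh "sudo systemctl show docker --property=Environment --no-pager" | grep FOO=BAR
  minikube -p docker-flags-819560 ssh "sudo systemctl show docker --property=ExecStart --no-pager" | grep -- --icc=true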

TestErrorSpam/setup (29.14s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-951904 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-951904 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-951904 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-951904 --driver=docker  --container-runtime=docker: (29.139714784s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (29.14s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (1.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 pause
--- PASS: TestErrorSpam/pause (1.49s)

TestErrorSpam/unpause (1.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (11.31s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 stop: (11.099212034s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-951904 --log_dir /tmp/nospam-951904 stop
--- PASS: TestErrorSpam/stop (11.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22343-548332/.minikube/files/etc/test/nested/copy/550197/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.97s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918607 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1227 09:26:06.970270  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:06.976134  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:06.986488  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:07.006891  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:07.047218  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:07.127626  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:07.288036  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:07.608653  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:08.249572  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:09.530065  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:12.091723  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-918607 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m6.97021872s)
--- PASS: TestFunctional/serial/StartWithProxy (66.97s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.9s)

=== RUN   TestFunctional/serial/SoftStart
I1227 09:26:15.770119  550197 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918607 --alsologtostderr -v=8
E1227 09:26:17.213046  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:27.453510  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:26:47.934555  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-918607 --alsologtostderr -v=8: (36.896418814s)
functional_test.go:678: soft start took 36.899225247s for "functional-918607" cluster.
I1227 09:26:52.666891  550197 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (36.90s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-918607 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-918607 /tmp/TestFunctionalserialCacheCmdcacheadd_local2974606016/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cache add minikube-local-cache-test:functional-918607
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cache delete minikube-local-cache-test:functional-918607
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-918607
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.368881ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)
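The staged failure above is the whole point: cache reload re-pushes images from the host-side cache into the node after they have been removed there. The same round trip by hand:

  minikube -p functional-918607 ssh sudo docker rmi registry.k8s.io/pause:latest
  minikube -p functional-918607 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "gone, as staged"
  minikube -p functional-918607 cache reload
  minikube -p functional-918607 ssh sudo crictl inspecti registry.k8s.io/pause:latest && echo "restored from cache"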

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 kubectl -- --context functional-918607 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-918607 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (38.51s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918607 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1227 09:27:28.895315  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-918607 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.514436851s)
functional_test.go:776: restart took 38.514523634s for "functional-918607" cluster.
I1227 09:27:37.443937  550197 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (38.51s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-918607 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
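The phase/readiness readout above comes from reading the control-plane pods straight off the API by their tier=control-plane label. A compact jsonpath version of the same check (the output shape is mine; static control-plane pods carry a component label by convention):

  kubectl --context functional-918607 -n kube-system get po -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.labels.component}{" "}{.status.phase}{"\n"}{end}'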

TestFunctional/serial/LogsCmd (1.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-918607 logs: (1.19780146s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

TestFunctional/serial/LogsFileCmd (1.22s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 logs --file /tmp/TestFunctionalserialLogsFileCmd1414144318/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-918607 logs --file /tmp/TestFunctionalserialLogsFileCmd1414144318/001/logs.txt: (1.222776121s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

TestFunctional/serial/InvalidService (4.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-918607 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-918607
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-918607: exit status 115 (731.130882ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30999 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-918607 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.92s)
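The non-zero exit above is the behavior under test: with no running pod behind invalid-svc, minikube exits with status 115 (SVC_UNREACHABLE). A minimal sketch of distinguishing that exit code from a caller, assuming a minikube binary on PATH and the same profile name:

// exitcode_sketch.go (hypothetical)
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-918607")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("unexpected success:\n%s", out)
	case errors.As(err, &exitErr):
		// The test expects 115 here; anything else would be a real failure.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}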
TestFunctional/parallel/ConfigCmd (0.64s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 config get cpus: exit status 14 (114.340591ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 config get cpus: exit status 14 (93.941409ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.64s)
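The sequence above is a set/get/unset round trip: config get exits with status 14 whenever the key is absent, which is why both gets after an unset are expected to fail. A sketch of the same round trip, assuming a minikube binary on PATH (the exitCode helper is illustrative, not a minikube API):

// configcmd_sketch.go (hypothetical)
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs minikube with the given args and returns its exit status.
func exitCode(args ...string) int {
	err := exec.Command("minikube", args...).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	if err != nil {
		return -1 // binary missing or not startable
	}
	return 0
}

func main() {
	p := "functional-918607"
	exitCode("-p", p, "config", "unset", "cpus")
	fmt.Println("get after unset:", exitCode("-p", p, "config", "get", "cpus")) // expect 14
	exitCode("-p", p, "config", "set", "cpus", "2")
	fmt.Println("get after set:", exitCode("-p", p, "config", "get", "cpus")) // expect 0
	exitCode("-p", p, "config", "unset", "cpus")
	fmt.Println("get after final unset:", exitCode("-p", p, "config", "get", "cpus")) // expect 14
}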
TestFunctional/parallel/DashboardCmd (9.34s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-918607 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-918607 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 591490: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.34s)
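The "unable to kill pid 591490: os: process already finished" helper message is benign: the dashboard child exited before cleanup tried to kill it. In Go that condition surfaces as os.ErrProcessDone, as this minimal standalone sketch (not the helper's actual code) shows:

// processdone_sketch.go (hypothetical)
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("true") // a child that exits immediately
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait() // child is now gone

	// Killing an already-reaped process yields os.ErrProcessDone
	// ("os: process already finished"), the message seen in the log.
	if err := cmd.Process.Kill(); errors.Is(err, os.ErrProcessDone) {
		fmt.Println("benign: process had already finished")
	} else if err != nil {
		fmt.Println("kill failed:", err)
	}
}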
TestFunctional/parallel/DryRun (0.54s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918607 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-918607 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (263.758226ms)
-- stdout --
	* [functional-918607] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1227 09:28:16.703332  590201 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:28:16.703494  590201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:28:16.703505  590201 out.go:374] Setting ErrFile to fd 2...
	I1227 09:28:16.703511  590201 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:28:16.703798  590201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:28:16.704207  590201 out.go:368] Setting JSON to false
	I1227 09:28:16.705291  590201 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15048,"bootTime":1766812649,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:28:16.705362  590201 start.go:143] virtualization:  
	I1227 09:28:16.708524  590201 out.go:179] * [functional-918607] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1227 09:28:16.712142  590201 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:28:16.712194  590201 notify.go:221] Checking for updates...
	I1227 09:28:16.715055  590201 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:28:16.717912  590201 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:28:16.720748  590201 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:28:16.724457  590201 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:28:16.727811  590201 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:28:16.731696  590201 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:28:16.732270  590201 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:28:16.780507  590201 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:28:16.780624  590201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:28:16.885970  590201 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:28:16.875703187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:28:16.886074  590201 docker.go:319] overlay module found
	I1227 09:28:16.889093  590201 out.go:179] * Using the docker driver based on existing profile
	I1227 09:28:16.891960  590201 start.go:309] selected driver: docker
	I1227 09:28:16.891985  590201 start.go:928] validating driver "docker" against &{Name:functional-918607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-918607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:28:16.892103  590201 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:28:16.895568  590201 out.go:203] 
	W1227 09:28:16.898384  590201 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1227 09:28:16.901158  590201 out.go:203] 
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918607 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.54s)
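The first dry run fails by design: 250MB is below the 1800MB usable minimum, so minikube exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY before touching any resources, while the second invocation (without --memory) passes. A minimal sketch of that style of pre-flight validation; validateMemory and its threshold handling are illustrative stand-ins, not minikube's actual code (the real message mixes MiB and MB units):

// preflight_sketch.go (hypothetical)
package main

import "fmt"

const minUsableMB = 1800 // the minimum quoted in the error above

// validateMemory rejects an undersized request before any cluster
// resources are created, mirroring the dry-run behavior in the log.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	for _, req := range []int{250, 3072} {
		if err := validateMemory(req); err != nil {
			fmt.Println("reject:", err)
		} else {
			fmt.Printf("accept: %dMB\n", req)
		}
	}
}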
TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-918607 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-918607 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (207.538698ms)
-- stdout --
	* [functional-918607] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1227 09:28:20.382739  591308 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:28:20.382914  591308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:28:20.382948  591308 out.go:374] Setting ErrFile to fd 2...
	I1227 09:28:20.382970  591308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:28:20.383547  591308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:28:20.383995  591308 out.go:368] Setting JSON to false
	I1227 09:28:20.384983  591308 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15052,"bootTime":1766812649,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1227 09:28:20.385082  591308 start.go:143] virtualization:  
	I1227 09:28:20.388359  591308 out.go:179] * [functional-918607] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1227 09:28:20.392330  591308 notify.go:221] Checking for updates...
	I1227 09:28:20.393191  591308 out.go:179]   - MINIKUBE_LOCATION=22343
	I1227 09:28:20.396282  591308 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1227 09:28:20.399167  591308 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	I1227 09:28:20.401988  591308 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	I1227 09:28:20.404911  591308 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1227 09:28:20.407845  591308 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1227 09:28:20.411265  591308 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:28:20.411842  591308 driver.go:422] Setting default libvirt URI to qemu:///system
	I1227 09:28:20.449588  591308 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1227 09:28:20.449689  591308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:28:20.518734  591308 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-27 09:28:20.508821562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:28:20.518838  591308 docker.go:319] overlay module found
	I1227 09:28:20.522137  591308 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1227 09:28:20.524900  591308 start.go:309] selected driver: docker
	I1227 09:28:20.524918  591308 start.go:928] validating driver "docker" against &{Name:functional-918607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766570851-22316@sha256:7975a7a1117280f99ad7696c9c80bdca993064fe9e309e9984685e0ce989758a Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-918607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1227 09:28:20.525017  591308 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1227 09:28:20.528480  591308 out.go:203] 
	W1227 09:28:20.531318  591308 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1227 09:28:20.534089  591308 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
TestFunctional/parallel/StatusCmd (1.2s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
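The second status invocation above formats output through a Go template ("kublet" is the test's own literal format string, reproduced verbatim). A minimal sketch of how such a template renders against a status struct; the struct and its values here are hypothetical stand-ins, not this run's data:

// statustemplate_sketch.go (hypothetical)
package main

import (
	"os"
	"text/template"
)

// Status carries the fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}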
TestFunctional/parallel/ServiceCmdConnect (7.69s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-918607 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-918607 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-72jr2" [fc7f4c2c-1b59-4583-a82f-dc3b87d86ca2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-72jr2" [fc7f4c2c-1b59-4583-a82f-dc3b87d86ca2] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003542702s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30532
functional_test.go:1685: http://192.168.49.2:30532: success! body:
Request served by hello-node-connect-5d95464fd4-72jr2

HTTP/1.1 GET /

Host: 192.168.49.2:30532
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.69s)
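After exposing the deployment as a NodePort service, the test resolves the URL and fetches it until the echo-server answers. A minimal sketch of that client side, assuming the endpoint printed above were still reachable:

// nodeport_probe_sketch.go (hypothetical)
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:30532" // the endpoint the test discovered
	client := &http.Client{Timeout: 5 * time.Second}

	// Retry briefly: the NodePort can be open before the pod is ready.
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("status %d\n%s", resp.StatusCode, body)
		return
	}
	fmt.Println("service never became reachable")
}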
TestFunctional/parallel/AddonsCmd (0.17s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)
TestFunctional/parallel/PersistentVolumeClaim (23.16s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [7caf5c2a-f286-4fae-bccb-cafbf105a2d5] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004065421s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-918607 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-918607 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-918607 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-918607 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [48bb76c0-5365-4b07-b8e6-27e22aab515b] Pending
helpers_test.go:353: "sp-pod" [48bb76c0-5365-4b07-b8e6-27e22aab515b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [48bb76c0-5365-4b07-b8e6-27e22aab515b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00375283s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-918607 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-918607 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-918607 delete -f testdata/storage-provisioner/pod.yaml: (1.098501284s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-918607 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [23bd9e96-5e72-4772-b30f-54a5f93976c9] Pending
helpers_test.go:353: "sp-pod" [23bd9e96-5e72-4772-b30f-54a5f93976c9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004093525s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-918607 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.16s)
TestFunctional/parallel/SSHCmd (0.86s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.86s)
TestFunctional/parallel/CpCmd (2.53s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh -n functional-918607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cp functional-918607:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3894862897/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh -n functional-918607 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh -n functional-918607 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.53s)
TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/550197/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /etc/test/nested/copy/550197/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)
TestFunctional/parallel/CertSync (2.11s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/550197.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /etc/ssl/certs/550197.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/550197.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /usr/share/ca-certificates/550197.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/5501972.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /etc/ssl/certs/5501972.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/5501972.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /usr/share/ca-certificates/5501972.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-918607 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh "sudo systemctl is-active crio": exit status 1 (412.199588ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
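Context for the pass above: systemctl is-active prints the unit state and exits non-zero (typically 3) when the unit is not active, so "inactive" on stdout plus a non-zero ssh exit is exactly what the test wants: with Docker as the active runtime, cri-o must be disabled. A minimal local sketch of interpreting that convention, assuming a systemd host:

// isactive_sketch.go (hypothetical)
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output returns the captured stdout even when the command exits non-zero.
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("crio is active") // exit 0 means "active"
	case errors.As(err, &exitErr):
		fmt.Printf("crio not active (state %q, exit %d)\n", state, exitErr.ExitCode())
	default:
		fmt.Println("systemctl unavailable:", err)
	}
}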
TestFunctional/parallel/License (0.44s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.44s)
TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)
TestFunctional/parallel/Version/components (1.21s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-918607 version -o=json --components: (1.213567074s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918607 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-918607
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918607 image ls --format short --alsologtostderr:
I1227 09:28:31.381541  593256 out.go:360] Setting OutFile to fd 1 ...
I1227 09:28:31.381703  593256 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.381724  593256 out.go:374] Setting ErrFile to fd 2...
I1227 09:28:31.381744  593256 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.382121  593256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:28:31.382936  593256 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.383164  593256 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.383800  593256 cli_runner.go:164] Run: docker container inspect functional-918607 --format={{.State.Status}}
I1227 09:28:31.408609  593256 ssh_runner.go:195] Run: systemctl --version
I1227 09:28:31.408659  593256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918607
I1227 09:28:31.443444  593256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/functional-918607/id_rsa Username:docker}
I1227 09:28:31.541910  593256 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918607 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ ddc8422d4d35a │ 48.7MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ ba04bb24b9575 │ 29MB   │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 962dbbc0e55ec │ 53.7MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 88898f1d1a62a │ 71.1MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ de369f46c2ff5 │ 72.8MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-918607 │ ce2d2cda2d858 │ 4.78MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                             │ 3.1               │ 8057e0500773a │ 525kB  │
│ docker.io/library/minikube-local-cache-test       │ functional-918607 │ 5b5151ffa8a3f │ 30B    │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 271e49a0ebc56 │ 59.8MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ c3fcf259c473a │ 83.9MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ e08f4d9d2e6ed │ 73.4MB │
│ registry.k8s.io/pause                             │ 3.3               │ 3d18732f8686c │ 484kB  │
│ registry.k8s.io/pause                             │ latest            │ 8cb2091f603e7 │ 240kB  │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918607 image ls --format table --alsologtostderr:
I1227 09:28:31.965180  593440 out.go:360] Setting OutFile to fd 1 ...
I1227 09:28:31.965307  593440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.965316  593440 out.go:374] Setting ErrFile to fd 2...
I1227 09:28:31.965322  593440 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.965610  593440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:28:31.966242  593440 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.966363  593440 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.966901  593440 cli_runner.go:164] Run: docker container inspect functional-918607 --format={{.State.Status}}
I1227 09:28:31.987840  593440 ssh_runner.go:195] Run: systemctl --version
I1227 09:28:31.987893  593440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918607
I1227 09:28:32.021062  593440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/functional-918607/id_rsa Username:docker}
I1227 09:28:32.142752  593440 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918607 image ls --format json --alsologtostderr:
[{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"83900000"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"48700000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"5b5151ffa8a3f4959da96272192085ce7f8d0d2f025d4e5b728731bfc53d4bb4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-918607"],"size":"30"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"71100000"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"59800000"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"73400000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"72800000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918607 image ls --format json --alsologtostderr:
I1227 09:28:31.690416  593350 out.go:360] Setting OutFile to fd 1 ...
I1227 09:28:31.690610  593350 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.690624  593350 out.go:374] Setting ErrFile to fd 2...
I1227 09:28:31.690630  593350 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.690929  593350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:28:31.691713  593350 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.691900  593350 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.692541  593350 cli_runner.go:164] Run: docker container inspect functional-918607 --format={{.State.Status}}
I1227 09:28:31.719865  593350 ssh_runner.go:195] Run: systemctl --version
I1227 09:28:31.719920  593350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918607
I1227 09:28:31.745542  593350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/functional-918607/id_rsa Username:docker}
I1227 09:28:31.859200  593350 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
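As the stderr above shows, image ls is backed by "docker images --no-trunc --format {{json .}}" inside the node, which emits one JSON object per line. A minimal sketch of consuming that newline-delimited stream, run against a local docker daemon rather than the minikube node:

// imagels_sketch.go (hypothetical)
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerImage picks out the fields of the per-line JSON this sketch prints.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var img dockerImage
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			continue // tolerate malformed lines instead of aborting the list
		}
		fmt.Printf("%s:%s\t%s\t%s\n", img.Repository, img.Tag, img.ID, img.Size)
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}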
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-918607 image ls --format yaml --alsologtostderr:
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "72800000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "83900000"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "73400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5b5151ffa8a3f4959da96272192085ce7f8d0d2f025d4e5b728731bfc53d4bb4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-918607
size: "30"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "71100000"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "48700000"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "59800000"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918607 image ls --format yaml --alsologtostderr:
I1227 09:28:31.436164  593278 out.go:360] Setting OutFile to fd 1 ...
I1227 09:28:31.436330  593278 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.436343  593278 out.go:374] Setting ErrFile to fd 2...
I1227 09:28:31.436349  593278 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:31.436617  593278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:28:31.437225  593278 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.437352  593278 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:31.437851  593278 cli_runner.go:164] Run: docker container inspect functional-918607 --format={{.State.Status}}
I1227 09:28:31.458826  593278 ssh_runner.go:195] Run: systemctl --version
I1227 09:28:31.458892  593278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918607
I1227 09:28:31.483862  593278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/functional-918607/id_rsa Username:docker}
I1227 09:28:31.588714  593278 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh pgrep buildkitd: exit status 1 (363.062081ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image build -t localhost/my-image:functional-918607 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-918607 image build -t localhost/my-image:functional-918607 testdata/build --alsologtostderr: (2.94943836s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-918607 image build -t localhost/my-image:functional-918607 testdata/build --alsologtostderr:
I1227 09:28:32.023644  593445 out.go:360] Setting OutFile to fd 1 ...
I1227 09:28:32.024359  593445 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:32.024369  593445 out.go:374] Setting ErrFile to fd 2...
I1227 09:28:32.024375  593445 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1227 09:28:32.024635  593445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
I1227 09:28:32.025293  593445 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:32.026584  593445 config.go:182] Loaded profile config "functional-918607": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1227 09:28:32.027233  593445 cli_runner.go:164] Run: docker container inspect functional-918607 --format={{.State.Status}}
I1227 09:28:32.047620  593445 ssh_runner.go:195] Run: systemctl --version
I1227 09:28:32.047675  593445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-918607
I1227 09:28:32.078246  593445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33508 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/functional-918607/id_rsa Username:docker}
I1227 09:28:32.218266  593445 build_images.go:162] Building image from path: /tmp/build.2963258708.tar
I1227 09:28:32.218366  593445 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1227 09:28:32.228350  593445 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2963258708.tar
I1227 09:28:32.232315  593445 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2963258708.tar: stat -c "%s %y" /var/lib/minikube/build/build.2963258708.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2963258708.tar': No such file or directory
I1227 09:28:32.232348  593445 ssh_runner.go:362] scp /tmp/build.2963258708.tar --> /var/lib/minikube/build/build.2963258708.tar (3072 bytes)
I1227 09:28:32.254313  593445 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2963258708
I1227 09:28:32.262322  593445 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2963258708 -xf /var/lib/minikube/build/build.2963258708.tar
I1227 09:28:32.270744  593445 docker.go:364] Building image: /var/lib/minikube/build/build.2963258708
I1227 09:28:32.270823  593445 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-918607 /var/lib/minikube/build/build.2963258708
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:cc2705714068d6003d8b85a3b124d9ab7251134f91823f9a93dd837fe2e98768 done
#8 naming to localhost/my-image:functional-918607 done
#8 DONE 0.1s
I1227 09:28:34.866371  593445 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-918607 /var/lib/minikube/build/build.2963258708: (2.595521665s)
I1227 09:28:34.866463  593445 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2963258708
I1227 09:28:34.874273  593445 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2963258708.tar
I1227 09:28:34.881683  593445 build_images.go:218] Built localhost/my-image:functional-918607 from /tmp/build.2963258708.tar
I1227 09:28:34.881712  593445 build_images.go:134] succeeded building to: functional-918607
I1227 09:28:34.881718  593445 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
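
Note: the build flow above (tar the context, scp it into the node, docker build inside) can be reproduced by hand. A minimal sketch, assuming a build context equivalent to the three steps in the log; the contents of testdata/build are not shown in this report, so the Dockerfile and content.txt below are hypothetical reconstructions:

    # recreate a context matching steps [1/3]..[3/3] above (hypothetical contents)
    mkdir -p /tmp/build-context && cd /tmp/build-context
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo 'hello' > content.txt

    # build directly into the cluster's Docker daemon, then confirm the tag
    out/minikube-linux-arm64 -p functional-918607 image build -t localhost/my-image:functional-918607 .
    out/minikube-linux-arm64 -p functional-918607 image ls | grep my-image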

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

TestFunctional/parallel/DockerEnv/bash (1.44s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-918607 docker-env) && out/minikube-linux-arm64 status -p functional-918607"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-918607 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.44s)
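
Note: the DockerEnv check is just variable export plus a second command in the same shell. A minimal sketch, assuming bash (docker-env emits export lines for DOCKER_HOST and related variables):

    # point the local docker CLI at the daemon inside the minikube node
    eval "$(out/minikube-linux-arm64 -p functional-918607 docker-env)"
    docker images                                          # lists images from inside the node
    out/minikube-linux-arm64 status -p functional-918607

    # revert to the host daemon afterwards
    eval "$(out/minikube-linux-arm64 -p functional-918607 docker-env --unset)"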

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)
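
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile above together form a save/remove/load roundtrip. A condensed sketch (the tar path is illustrative):

    img=ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
    out/minikube-linux-arm64 -p functional-918607 image save "$img" /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-918607 image rm "$img"
    out/minikube-linux-arm64 -p functional-918607 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-918607 image ls       # the tag should be back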

TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "459.853862ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "80.386618ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "523.019922ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "69.195366ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
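
Note: the timings above reflect that --light skips probing cluster health, which is why it returns in tens of milliseconds. For scripting, the JSON form can be filtered; a sketch assuming jq is installed and the usual valid/invalid grouping in the payload:

    # print just the profile names
    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'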

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918607 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918607 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-918607 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 588685: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-918607 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-918607 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-918607 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [bd4f7484-c067-44cf-9533-9d49c4db4b16] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [bd4f7484-c067-44cf-9533-9d49c4db4b16] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004053852s
I1227 09:28:02.061100  550197 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-918607 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.51.161 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-918607 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
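
Note: the tunnel subtests above map onto a simple manual workflow; a minimal sketch, assuming a LoadBalancer service like the test's nginx-svc:

    # run a tunnel in the background so LoadBalancer services receive an ingress IP
    out/minikube-linux-arm64 -p functional-918607 tunnel &
    TUNNEL_PID=$!

    kubectl --context functional-918607 apply -f testdata/testsvc.yaml
    ip=$(kubectl --context functional-918607 get svc nginx-svc \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$ip" >/dev/null && echo "tunnel at http://$ip is working"

    kill "$TUNNEL_PID"    # DeleteTunnel: terminating the process tears the routes down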

TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-918607 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-918607 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-8lrlt" [730b7ebb-96bc-4cb2-8483-4d27772e5e24] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-8lrlt" [730b7ebb-96bc-4cb2-8483-4d27772e5e24] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003661049s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 service list -o json
functional_test.go:1509: Took "678.187318ms" to run "out/minikube-linux-arm64 -p functional-918607 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/MountCmd/any-port (9.43s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdany-port761547549/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766827697191371053" to /tmp/TestFunctionalparallelMountCmdany-port761547549/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766827697191371053" to /tmp/TestFunctionalparallelMountCmdany-port761547549/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766827697191371053" to /tmp/TestFunctionalparallelMountCmdany-port761547549/001/test-1766827697191371053
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (495.63791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 09:28:17.688103  550197 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 27 09:28 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 27 09:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 27 09:28 test-1766827697191371053
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh cat /mount-9p/test-1766827697191371053
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-918607 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [96543588-d055-4923-b5a4-b022ccb1c367] Pending
helpers_test.go:353: "busybox-mount" [96543588-d055-4923-b5a4-b022ccb1c367] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [96543588-d055-4923-b5a4-b022ccb1c367] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [96543588-d055-4923-b5a4-b022ccb1c367] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002894944s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-918607 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdany-port761547549/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.43s)
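
Note: the mount tests drive a 9p mount from the host into the guest; the initial findmnt failure and retry above are expected while the mount daemon comes up. A minimal sketch of the same loop (the host path is illustrative; the VerifyCleanup test further below uses mount -p <profile> --kill=true to kill stray mount processes):

    # expose a host directory inside the node over 9p (runs until stopped)
    out/minikube-linux-arm64 mount -p functional-918607 /tmp/hostdir:/mount-9p &

    # verify from inside the guest, then unmount
    out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-918607 ssh "sudo umount -f /mount-9p"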

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30645
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30645
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
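
Note: the ServiceCmd subtests amount to deploying a NodePort service and resolving its endpoint in different formats; condensed:

    kubectl --context functional-918607 create deployment hello-node \
        --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-918607 expose deployment hello-node --type=NodePort --port=8080

    out/minikube-linux-arm64 -p functional-918607 service list
    out/minikube-linux-arm64 -p functional-918607 service hello-node --url     # http endpoint
    out/minikube-linux-arm64 -p functional-918607 service --https --url hello-node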

TestFunctional/parallel/MountCmd/specific-port (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdspecific-port2428444409/001:/mount-9p --alsologtostderr -v=1 --port 35277]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (530.581659ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1227 09:28:27.153498  550197 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdspecific-port2428444409/001:/mount-9p --alsologtostderr -v=1 --port 35277] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh "sudo umount -f /mount-9p": exit status 1 (293.830225ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-918607 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdspecific-port2428444409/001:/mount-9p --alsologtostderr -v=1 --port 35277] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3157213071/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3157213071/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3157213071/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T" /mount1: exit status 1 (639.551223ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
2025/12/27 09:28:29 [DEBUG] GET http://127.0.0.1:38523/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-918607 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-918607 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3157213071/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3157213071/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-918607 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3157213071/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-918607
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-918607
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-918607
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (193.11s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1227 09:28:50.816195  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:31:06.959352  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:31:34.656410  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (3m12.164939848s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (193.11s)
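
Note: --ha provisions a multi-control-plane cluster (three control planes here, per the status output in StopSecondaryNode below). The cluster above can be recreated with:

    out/minikube-linux-arm64 -p ha-820364 start --ha --memory 3072 --wait true \
        --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 -p ha-820364 status      # per-node host/kubelet/apiserver state
    out/minikube-linux-arm64 -p ha-820364 node add    # adds the worker, as AddWorkerNode does below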

TestMultiControlPlane/serial/DeployApp (7.67s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 kubectl -- rollout status deployment/busybox: (4.718754123s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-chdh8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-pftnf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-srpnh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-chdh8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-pftnf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-srpnh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-chdh8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-pftnf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-srpnh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.67s)

TestMultiControlPlane/serial/PingHostFromPods (1.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-chdh8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-chdh8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-pftnf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-pftnf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-srpnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 kubectl -- exec busybox-769dd8b7dd-srpnh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.99s)
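
Note: the pipeline above extracts the host gateway address from nslookup output (the fifth line carries the answer's address field) and pings it from inside each pod; one pod's worth as a sketch:

    pod=busybox-769dd8b7dd-chdh8    # any running pod from the busybox deployment
    host_ip=$(out/minikube-linux-arm64 -p ha-820364 kubectl -- exec "$pod" -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 -p ha-820364 kubectl -- exec "$pod" -- sh -c "ping -c 1 $host_ip"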

TestMultiControlPlane/serial/AddWorkerNode (35.06s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 node add --alsologtostderr -v 5: (34.020130656s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5: (1.041504304s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.06s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-820364 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.075633528s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (20.38s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 status --output json --alsologtostderr -v 5: (1.038111739s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp testdata/cp-test.txt ha-820364:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948318771/001/cp-test_ha-820364.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364:/home/docker/cp-test.txt ha-820364-m02:/home/docker/cp-test_ha-820364_ha-820364-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test_ha-820364_ha-820364-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364:/home/docker/cp-test.txt ha-820364-m03:/home/docker/cp-test_ha-820364_ha-820364-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test_ha-820364_ha-820364-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364:/home/docker/cp-test.txt ha-820364-m04:/home/docker/cp-test_ha-820364_ha-820364-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test_ha-820364_ha-820364-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp testdata/cp-test.txt ha-820364-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948318771/001/cp-test_ha-820364-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m02:/home/docker/cp-test.txt ha-820364:/home/docker/cp-test_ha-820364-m02_ha-820364.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test_ha-820364-m02_ha-820364.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m02:/home/docker/cp-test.txt ha-820364-m03:/home/docker/cp-test_ha-820364-m02_ha-820364-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test_ha-820364-m02_ha-820364-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m02:/home/docker/cp-test.txt ha-820364-m04:/home/docker/cp-test_ha-820364-m02_ha-820364-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test_ha-820364-m02_ha-820364-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp testdata/cp-test.txt ha-820364-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948318771/001/cp-test_ha-820364-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m03:/home/docker/cp-test.txt ha-820364:/home/docker/cp-test_ha-820364-m03_ha-820364.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test_ha-820364-m03_ha-820364.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m03:/home/docker/cp-test.txt ha-820364-m02:/home/docker/cp-test_ha-820364-m03_ha-820364-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test_ha-820364-m03_ha-820364-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m03:/home/docker/cp-test.txt ha-820364-m04:/home/docker/cp-test_ha-820364-m03_ha-820364-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test_ha-820364-m03_ha-820364-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp testdata/cp-test.txt ha-820364-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile948318771/001/cp-test_ha-820364-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test.txt"
E1227 09:32:53.493764  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:53.499034  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:53.515591  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:53.536725  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:53.577021  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m04:/home/docker/cp-test.txt ha-820364:/home/docker/cp-test_ha-820364-m04_ha-820364.txt
E1227 09:32:53.657842  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:32:53.819777  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test.txt"
E1227 09:32:54.140871  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364 "sudo cat /home/docker/cp-test_ha-820364-m04_ha-820364.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m04:/home/docker/cp-test.txt ha-820364-m02:/home/docker/cp-test_ha-820364-m04_ha-820364-m02.txt
E1227 09:32:54.781176  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m02 "sudo cat /home/docker/cp-test_ha-820364-m04_ha-820364-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 cp ha-820364-m04:/home/docker/cp-test.txt ha-820364-m03:/home/docker/cp-test_ha-820364-m04_ha-820364-m03.txt
E1227 09:32:56.062074  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 ssh -n ha-820364-m03 "sudo cat /home/docker/cp-test_ha-820364-m04_ha-820364-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.38s)
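The CopyFile subtest above follows a fixed pattern: each "minikube -p <profile> cp <src-node>:<path> <dst-node>:<path>" is paired with an "ssh -n <node> sudo cat <path>" on both sides to confirm the copied bytes actually landed. A minimal copy-and-verify sketch in Go, assuming a minikube binary on PATH; the helper name is hypothetical and the profile/node names are borrowed from the run above purely for illustration:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// copyAndVerify runs `minikube cp` between two nodes, then compares the
// file contents on both nodes via `minikube ssh ... sudo cat`.
func copyAndVerify(profile, srcNode, src, dstNode, dst string) error {
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		srcNode+":"+src, dstNode+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v\n%s", err, out)
	}
	want, err := exec.Command("minikube", "-p", profile, "ssh", "-n", srcNode, "sudo cat "+src).Output()
	if err != nil {
		return err
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", dstNode, "sudo cat "+dst).Output()
	if err != nil {
		return err
	}
	if !bytes.Equal(want, got) {
		return fmt.Errorf("contents of %s and %s differ after cp", src, dst)
	}
	return nil
}

func main() {
	// Names mirror the log above; any running multi-node profile works.
	err := copyAndVerify("ha-820364", "ha-820364-m04", "/home/docker/cp-test.txt",
		"ha-820364", "/home/docker/cp-test_example.txt")
	if err != nil {
		log.Fatal(err)
	}
}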

TestMultiControlPlane/serial/StopSecondaryNode (12.07s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node stop m02 --alsologtostderr -v 5
E1227 09:32:58.622824  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:33:03.743889  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 node stop m02 --alsologtostderr -v 5: (11.278643939s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5: exit status 7 (792.321611ms)
-- stdout --
	ha-820364
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820364-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820364-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-820364-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1227 09:33:08.399234  615481 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:33:08.399534  615481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:08.399564  615481 out.go:374] Setting ErrFile to fd 2...
	I1227 09:33:08.399586  615481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:33:08.400019  615481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:33:08.400315  615481 out.go:368] Setting JSON to false
	I1227 09:33:08.400371  615481 mustload.go:66] Loading cluster: ha-820364
	I1227 09:33:08.401085  615481 config.go:182] Loaded profile config "ha-820364": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:33:08.401124  615481 status.go:174] checking status of ha-820364 ...
	I1227 09:33:08.401871  615481 cli_runner.go:164] Run: docker container inspect ha-820364 --format={{.State.Status}}
	I1227 09:33:08.402442  615481 notify.go:221] Checking for updates...
	I1227 09:33:08.424392  615481 status.go:371] ha-820364 host status = "Running" (err=<nil>)
	I1227 09:33:08.424413  615481 host.go:66] Checking if "ha-820364" exists ...
	I1227 09:33:08.424702  615481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820364
	I1227 09:33:08.456286  615481 host.go:66] Checking if "ha-820364" exists ...
	I1227 09:33:08.456686  615481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:33:08.456752  615481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820364
	I1227 09:33:08.476885  615481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33513 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/ha-820364/id_rsa Username:docker}
	I1227 09:33:08.584897  615481 ssh_runner.go:195] Run: systemctl --version
	I1227 09:33:08.593176  615481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:33:08.608359  615481 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:33:08.670229  615481 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-27 09:33:08.659520417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:33:08.670908  615481 kubeconfig.go:125] found "ha-820364" server: "https://192.168.49.254:8443"
	I1227 09:33:08.670941  615481 api_server.go:166] Checking apiserver status ...
	I1227 09:33:08.670993  615481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:33:08.688745  615481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2109/cgroup
	I1227 09:33:08.698134  615481 api_server.go:192] apiserver freezer: "8:freezer:/docker/32d6365f416877ca258570d513784670e5db8e5c4e547e877be40978da9d0c0a/kubepods/burstable/pod37233e3d16750d1664940f88e90f7abf/280400da6c5a0ba64f9641df416e32bf34d821cff48a49d4a4ad3247f232daa6"
	I1227 09:33:08.698201  615481 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/32d6365f416877ca258570d513784670e5db8e5c4e547e877be40978da9d0c0a/kubepods/burstable/pod37233e3d16750d1664940f88e90f7abf/280400da6c5a0ba64f9641df416e32bf34d821cff48a49d4a4ad3247f232daa6/freezer.state
	I1227 09:33:08.706346  615481 api_server.go:214] freezer state: "THAWED"
	I1227 09:33:08.706389  615481 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:33:08.714590  615481 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:33:08.714621  615481 status.go:463] ha-820364 apiserver status = Running (err=<nil>)
	I1227 09:33:08.714632  615481 status.go:176] ha-820364 status: &{Name:ha-820364 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:33:08.714683  615481 status.go:174] checking status of ha-820364-m02 ...
	I1227 09:33:08.715008  615481 cli_runner.go:164] Run: docker container inspect ha-820364-m02 --format={{.State.Status}}
	I1227 09:33:08.732262  615481 status.go:371] ha-820364-m02 host status = "Stopped" (err=<nil>)
	I1227 09:33:08.732436  615481 status.go:384] host is not running, skipping remaining checks
	I1227 09:33:08.732445  615481 status.go:176] ha-820364-m02 status: &{Name:ha-820364-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:33:08.732479  615481 status.go:174] checking status of ha-820364-m03 ...
	I1227 09:33:08.732820  615481 cli_runner.go:164] Run: docker container inspect ha-820364-m03 --format={{.State.Status}}
	I1227 09:33:08.751744  615481 status.go:371] ha-820364-m03 host status = "Running" (err=<nil>)
	I1227 09:33:08.751769  615481 host.go:66] Checking if "ha-820364-m03" exists ...
	I1227 09:33:08.753399  615481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820364-m03
	I1227 09:33:08.771326  615481 host.go:66] Checking if "ha-820364-m03" exists ...
	I1227 09:33:08.771648  615481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:33:08.771691  615481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820364-m03
	I1227 09:33:08.789194  615481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/ha-820364-m03/id_rsa Username:docker}
	I1227 09:33:08.902461  615481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:33:08.918216  615481 kubeconfig.go:125] found "ha-820364" server: "https://192.168.49.254:8443"
	I1227 09:33:08.918246  615481 api_server.go:166] Checking apiserver status ...
	I1227 09:33:08.918288  615481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:33:08.931780  615481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	I1227 09:33:08.940134  615481 api_server.go:192] apiserver freezer: "8:freezer:/docker/479ad4c858a823fb9dd9677e98182a6a9fff4d38fd2d701bf54678a1c8d4a690/kubepods/burstable/pod58640735709f9505bf6de052be585222/a7bb3e56ef53725ad115271e985cbebf417569d6021626b86b2fd463c8d90aac"
	I1227 09:33:08.940204  615481 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/479ad4c858a823fb9dd9677e98182a6a9fff4d38fd2d701bf54678a1c8d4a690/kubepods/burstable/pod58640735709f9505bf6de052be585222/a7bb3e56ef53725ad115271e985cbebf417569d6021626b86b2fd463c8d90aac/freezer.state
	I1227 09:33:08.947990  615481 api_server.go:214] freezer state: "THAWED"
	I1227 09:33:08.948019  615481 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1227 09:33:08.956821  615481 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1227 09:33:08.956892  615481 status.go:463] ha-820364-m03 apiserver status = Running (err=<nil>)
	I1227 09:33:08.956931  615481 status.go:176] ha-820364-m03 status: &{Name:ha-820364-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:33:08.956966  615481 status.go:174] checking status of ha-820364-m04 ...
	I1227 09:33:08.957319  615481 cli_runner.go:164] Run: docker container inspect ha-820364-m04 --format={{.State.Status}}
	I1227 09:33:08.976874  615481 status.go:371] ha-820364-m04 host status = "Running" (err=<nil>)
	I1227 09:33:08.976895  615481 host.go:66] Checking if "ha-820364-m04" exists ...
	I1227 09:33:08.977181  615481 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-820364-m04
	I1227 09:33:08.994450  615481 host.go:66] Checking if "ha-820364-m04" exists ...
	I1227 09:33:08.994739  615481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:33:08.994787  615481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-820364-m04
	I1227 09:33:09.014616  615481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/ha-820364-m04/id_rsa Username:docker}
	I1227 09:33:09.112253  615481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:33:09.132393  615481 status.go:176] ha-820364-m04 status: &{Name:ha-820364-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.07s)
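Note the expected non-zero exit above: with m02 stopped, "minikube status" returns exit status 7 rather than 0, so automation has to distinguish "some node is down" from a failure to run the command at all. A hedged sketch of inspecting that exit code from Go, standard library only; the meaning of code 7 is inferred from this run rather than from a documented contract:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-820364", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes report Running")
	case errors.As(err, &exitErr):
		// In the run above, exit code 7 accompanied a Stopped host.
		fmt.Printf("status exited with code %d:\n%s", exitErr.ExitCode(), out)
	default:
		log.Fatalf("could not invoke minikube: %v", err)
	}
}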

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.2s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node start m02 --alsologtostderr -v 5
E1227 09:33:13.984158  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:33:34.464609  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 node start m02 --alsologtostderr -v 5: (42.912921437s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5: (1.190123439s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.079109748s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (152.02s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 stop --alsologtostderr -v 5
E1227 09:34:15.425375  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 stop --alsologtostderr -v 5: (35.140846645s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 start --wait true --alsologtostderr -v 5
E1227 09:35:37.346466  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 09:36:06.959200  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 start --wait true --alsologtostderr -v 5: (1m56.71709405s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (152.02s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 node delete m03 --alsologtostderr -v 5: (10.308057924s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)
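The go-template query above walks every node's status.conditions and prints the status of the condition whose type is "Ready", which is how the test asserts the remaining nodes are Ready after the delete. Because kubectl's go-template output format is Go's text/template, the same filter can be exercised locally; a self-contained sketch with inlined sample data (the node JSON below is illustrative, not captured from this run):

package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

// Same template as the kubectl call above.
const tmpl = `{{range .items}}{{range .status.conditions}}` +
	`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Illustrative stand-in for `kubectl get nodes -o json` output.
const nodes = `{"items":[{"status":{"conditions":[
	{"type":"MemoryPressure","status":"False"},
	{"type":"Ready","status":"True"}]}}]}`

func main() {
	var data interface{}
	if err := json.Unmarshal([]byte(nodes), &data); err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err) // prints " True" for the sample above
	}
}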

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (33.54s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 stop --alsologtostderr -v 5: (33.414503044s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5: exit status 7 (127.514865ms)
-- stdout --
	ha-820364
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820364-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-820364-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 09:37:12.874087  642563 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:37:12.874309  642563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:12.874347  642563 out.go:374] Setting ErrFile to fd 2...
	I1227 09:37:12.874367  642563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:37:12.874756  642563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:37:12.875031  642563 out.go:368] Setting JSON to false
	I1227 09:37:12.875093  642563 mustload.go:66] Loading cluster: ha-820364
	I1227 09:37:12.875831  642563 config.go:182] Loaded profile config "ha-820364": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:37:12.875874  642563 status.go:174] checking status of ha-820364 ...
	I1227 09:37:12.876653  642563 cli_runner.go:164] Run: docker container inspect ha-820364 --format={{.State.Status}}
	I1227 09:37:12.877098  642563 notify.go:221] Checking for updates...
	I1227 09:37:12.901012  642563 status.go:371] ha-820364 host status = "Stopped" (err=<nil>)
	I1227 09:37:12.901037  642563 status.go:384] host is not running, skipping remaining checks
	I1227 09:37:12.901044  642563 status.go:176] ha-820364 status: &{Name:ha-820364 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:37:12.901075  642563 status.go:174] checking status of ha-820364-m02 ...
	I1227 09:37:12.901379  642563 cli_runner.go:164] Run: docker container inspect ha-820364-m02 --format={{.State.Status}}
	I1227 09:37:12.933366  642563 status.go:371] ha-820364-m02 host status = "Stopped" (err=<nil>)
	I1227 09:37:12.933389  642563 status.go:384] host is not running, skipping remaining checks
	I1227 09:37:12.933396  642563 status.go:176] ha-820364-m02 status: &{Name:ha-820364-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:37:12.933414  642563 status.go:174] checking status of ha-820364-m04 ...
	I1227 09:37:12.933721  642563 cli_runner.go:164] Run: docker container inspect ha-820364-m04 --format={{.State.Status}}
	I1227 09:37:12.954993  642563 status.go:371] ha-820364-m04 host status = "Stopped" (err=<nil>)
	I1227 09:37:12.955019  642563 status.go:384] host is not running, skipping remaining checks
	I1227 09:37:12.955026  642563 status.go:176] ha-820364-m04 status: &{Name:ha-820364-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.54s)

TestMultiControlPlane/serial/RestartCluster (69.03s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1227 09:37:53.494071  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m7.841881957s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
E1227 09:38:21.187446  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (69.03s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.96s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.96s)

TestMultiControlPlane/serial/AddSecondaryNode (85.83s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 node add --control-plane --alsologtostderr -v 5: (1m24.72199461s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-820364 status --alsologtostderr -v 5: (1.104145305s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.032578505s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

TestImageBuild/serial/Setup (28.34s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-160022 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-160022 --driver=docker  --container-runtime=docker: (28.343126375s)
--- PASS: TestImageBuild/serial/Setup (28.34s)

TestImageBuild/serial/NormalBuild (1.55s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-160022
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-160022: (1.546648739s)
--- PASS: TestImageBuild/serial/NormalBuild (1.55s)

TestImageBuild/serial/BuildWithBuildArg (0.95s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-160022
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.95s)

TestImageBuild/serial/BuildWithDockerIgnore (0.76s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-160022
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.76s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.01s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-160022
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-160022: (1.011889696s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.01s)

TestJSONOutput/start/Command (67.64s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-464936 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1227 09:41:06.959031  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-464936 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m7.635387701s)
--- PASS: TestJSONOutput/start/Command (67.64s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-464936 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-464936 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-464936 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-464936 --output=json --user=testUser: (11.124045296s)
--- PASS: TestJSONOutput/stop/Command (11.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-689070 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-689070 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.930478ms)
-- stdout --
	{"specversion":"1.0","id":"22d78086-7562-40e8-8009-58d16a22a809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-689070] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8629af2e-6095-4590-b3e9-07e13634a1ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"ec94e304-1f4d-4b19-87d8-2b08dbd8442a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dab7d24d-6138-4485-8cb2-78d73b52f003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig"}}
	{"specversion":"1.0","id":"15b78cf6-9e49-4a62-a2d0-3dc745145451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube"}}
	{"specversion":"1.0","id":"e115cab5-07d1-4d44-96cb-a71ce6338831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1ae6edae-8fa7-4c75-ab5e-86608632005d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e9759322-bb96-455b-8c33-cbe7296d45a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-689070" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-689070
--- PASS: TestErrorJSONOutput (0.24s)
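Both the happy-path JSON tests above and this error case emit one CloudEvents-style object per line: progress steps arrive as type io.k8s.sigs.minikube.step, plain messages as ...info, and failures as ...error with the exit code carried inside data. A minimal line-by-line decoder sketch; the struct fields are inferred from the events printed in this report, not from a published schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents envelope seen in the -- stdout -- block above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: minikube start --output=json ...
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}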

TestKicCustomNetwork/create_custom_network (27.39s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-177382 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-177382 --network=: (25.063112971s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-177382" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-177382
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-177382: (2.294935845s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.39s)

TestKicCustomNetwork/use_default_bridge_network (29.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-082251 --network=bridge
E1227 09:42:30.016782  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-082251 --network=bridge: (27.599348658s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-082251" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-082251
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-082251: (2.155795319s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.79s)

TestKicExistingNetwork (30.73s)

=== RUN   TestKicExistingNetwork
I1227 09:42:52.577030  550197 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1227 09:42:52.592388  550197 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1227 09:42:52.592473  550197 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1227 09:42:52.592493  550197 cli_runner.go:164] Run: docker network inspect existing-network
W1227 09:42:52.608479  550197 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1227 09:42:52.608506  550197 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1227 09:42:52.608522  550197 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1227 09:42:52.608624  550197 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1227 09:42:52.624300  550197 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e355fd7f0d9b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:60:87:03:40:b8} reservation:<nil>}
I1227 09:42:52.624571  550197 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001861270}
I1227 09:42:52.624604  550197 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1227 09:42:52.624655  550197 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1227 09:42:52.684579  550197 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-605692 --network=existing-network
E1227 09:42:53.494635  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-605692 --network=existing-network: (28.492764315s)
helpers_test.go:176: Cleaning up "existing-network-605692" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-605692
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-605692: (2.099823239s)
I1227 09:43:23.292898  550197 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.73s)
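The interesting part of this test is visible in the log: the network is created before minikube starts, the subnet picker skips 192.168.49.0/24 because an existing bridge already occupies it, settles on 192.168.58.0/24, and the network is then materialized with a plain docker network create. A sketch of that pre-creation step, reusing the exact flags from the cli_runner invocation above (the chosen subnet is simply whatever the picker found free on this host):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Flags copied from the logged command; the labels are what let
	// minikube recognize, and later clean up, the network.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("docker network create failed: %v\n%s", err, out)
	}
}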

TestKicCustomSubnet (29s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-382665 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-382665 --subnet=192.168.60.0/24: (26.832584871s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-382665 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-382665" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-382665
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-382665: (2.135285815s)
--- PASS: TestKicCustomSubnet (29.00s)

TestKicStaticIP (30.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-023423 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-023423 --static-ip=192.168.200.200: (28.321123855s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-023423 ip
helpers_test.go:176: Cleaning up "static-ip-023423" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-023423
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-023423: (2.23781585s)
--- PASS: TestKicStaticIP (30.73s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (62.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-904222 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-904222 --driver=docker  --container-runtime=docker: (27.50551361s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-906995 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-906995 --driver=docker  --container-runtime=docker: (29.382491853s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-904222
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-906995
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-906995" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-906995
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-906995: (2.174665024s)
helpers_test.go:176: Cleaning up "first-904222" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-904222
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-904222: (2.171313135s)
--- PASS: TestMinikubeProfile (62.68s)

TestMountStart/serial/StartWithMountFirst (10.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-241439 --memory=3072 --mount-string /tmp/TestMountStartserial3383084788/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-241439 --memory=3072 --mount-string /tmp/TestMountStartserial3383084788/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.286021283s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.29s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-241439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (10.2s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-243605 --memory=3072 --mount-string /tmp/TestMountStartserial3383084788/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-243605 --memory=3072 --mount-string /tmp/TestMountStartserial3383084788/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.194914143s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.20s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-243605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-241439 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-241439 --alsologtostderr -v=5: (1.565801322s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-243605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-243605
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-243605: (1.28594226s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-243605
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-243605: (7.433870878s)
--- PASS: TestMountStart/serial/RestartStopped (8.43s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-243605 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
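
Note: the TestMountStart sequence above boils down to sharing one host directory between two --no-kubernetes profiles and re-checking the mount after delete, stop, and restart; a minimal sketch using the same commands this run used (profile names and the /tmp mount path are from this run, adjust as needed):

    # start a profile with the host directory mounted at /minikube-host
    out/minikube-linux-arm64 start -p mount-start-2-243605 --memory=3072 \
      --mount-string /tmp/TestMountStartserial3383084788/001:/minikube-host \
      --mount-port 46465 --no-kubernetes --driver=docker --container-runtime=docker
    # the mount should still list after a stop/start cycle
    out/minikube-linux-arm64 stop -p mount-start-2-243605
    out/minikube-linux-arm64 start -p mount-start-2-243605
    out/minikube-linux-arm64 -p mount-start-2-243605 ssh -- ls /minikube-host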

TestMultiNode/serial/FreshStart2Nodes (85.33s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-619557 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1227 09:46:06.959359  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-619557 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.748194292s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.33s)

TestMultiNode/serial/DeployApp2Nodes (5.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-619557 -- rollout status deployment/busybox: (3.675074469s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-sc4wt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-w9fl5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-sc4wt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-w9fl5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-sc4wt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-w9fl5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)
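
Note: DeployApp2Nodes is a DNS smoke test: apply the busybox manifest, wait for the rollout, then resolve an external name and the in-cluster service name from each pod; a sketch with the same commands (pod names such as busybox-769dd8b7dd-sc4wt vary per run, so <pod> is a placeholder):

    out/minikube-linux-arm64 kubectl -p multinode-619557 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p multinode-619557 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec <pod> -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local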

TestMultiNode/serial/PingHostFrom2Pods (1.00s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-sc4wt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-sc4wt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-w9fl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-619557 -- exec busybox-769dd8b7dd-w9fl5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
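
Note: the pipeline above pulls the resolved address of host.minikube.internal out of nslookup's output (fifth line, third space-separated field) and pings it once; run inside a pod, the two exec'd commands collapse to this hypothetical one-liner:

    ping -c 1 "$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)"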

TestMultiNode/serial/AddNode (35.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-619557 -v=5 --alsologtostderr
E1227 09:47:53.494591  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-619557 -v=5 --alsologtostderr: (34.319354957s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.01s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-619557 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.22s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp testdata/cp-test.txt multinode-619557:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3102114870/001/cp-test_multinode-619557.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557:/home/docker/cp-test.txt multinode-619557-m02:/home/docker/cp-test_multinode-619557_multinode-619557-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 "sudo cat /home/docker/cp-test_multinode-619557_multinode-619557-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557:/home/docker/cp-test.txt multinode-619557-m03:/home/docker/cp-test_multinode-619557_multinode-619557-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m03 "sudo cat /home/docker/cp-test_multinode-619557_multinode-619557-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp testdata/cp-test.txt multinode-619557-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3102114870/001/cp-test_multinode-619557-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557-m02:/home/docker/cp-test.txt multinode-619557:/home/docker/cp-test_multinode-619557-m02_multinode-619557.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557 "sudo cat /home/docker/cp-test_multinode-619557-m02_multinode-619557.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557-m02:/home/docker/cp-test.txt multinode-619557-m03:/home/docker/cp-test_multinode-619557-m02_multinode-619557-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m03 "sudo cat /home/docker/cp-test_multinode-619557-m02_multinode-619557-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp testdata/cp-test.txt multinode-619557-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3102114870/001/cp-test_multinode-619557-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557-m03:/home/docker/cp-test.txt multinode-619557:/home/docker/cp-test_multinode-619557-m03_multinode-619557.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557 "sudo cat /home/docker/cp-test_multinode-619557-m03_multinode-619557.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557-m03:/home/docker/cp-test.txt multinode-619557-m02:/home/docker/cp-test_multinode-619557-m03_multinode-619557-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 "sudo cat /home/docker/cp-test_multinode-619557-m03_multinode-619557-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.22s)
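
Note: CopyFile exercises every `minikube cp` direction (host to node, node to host, node to node) and verifies each copy with `ssh -- sudo cat`; one node-to-node round trip from the run above, as a sketch:

    out/minikube-linux-arm64 -p multinode-619557 cp testdata/cp-test.txt multinode-619557:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-619557 cp multinode-619557:/home/docker/cp-test.txt \
      multinode-619557-m02:/home/docker/cp-test_multinode-619557_multinode-619557-m02.txt
    out/minikube-linux-arm64 -p multinode-619557 ssh -n multinode-619557-m02 \
      "sudo cat /home/docker/cp-test_multinode-619557_multinode-619557-m02.txt"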

TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-619557 node stop m03: (1.309366958s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-619557 status: exit status 7 (552.086624ms)
-- stdout --
	multinode-619557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-619557-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-619557-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr: exit status 7 (525.46826ms)
-- stdout --
	multinode-619557
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-619557-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-619557-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 09:48:20.363872  715643 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:48:20.364048  715643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:48:20.364079  715643 out.go:374] Setting ErrFile to fd 2...
	I1227 09:48:20.364104  715643 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:48:20.364489  715643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:48:20.364770  715643 out.go:368] Setting JSON to false
	I1227 09:48:20.364832  715643 mustload.go:66] Loading cluster: multinode-619557
	I1227 09:48:20.365513  715643 config.go:182] Loaded profile config "multinode-619557": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:48:20.365555  715643 status.go:174] checking status of multinode-619557 ...
	I1227 09:48:20.366301  715643 cli_runner.go:164] Run: docker container inspect multinode-619557 --format={{.State.Status}}
	I1227 09:48:20.366735  715643 notify.go:221] Checking for updates...
	I1227 09:48:20.387217  715643 status.go:371] multinode-619557 host status = "Running" (err=<nil>)
	I1227 09:48:20.387240  715643 host.go:66] Checking if "multinode-619557" exists ...
	I1227 09:48:20.387606  715643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-619557
	I1227 09:48:20.407489  715643 host.go:66] Checking if "multinode-619557" exists ...
	I1227 09:48:20.407807  715643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:48:20.407862  715643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-619557
	I1227 09:48:20.430431  715643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33638 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/multinode-619557/id_rsa Username:docker}
	I1227 09:48:20.532358  715643 ssh_runner.go:195] Run: systemctl --version
	I1227 09:48:20.538600  715643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:48:20.551312  715643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1227 09:48:20.605476  715643 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-27 09:48:20.596117302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1227 09:48:20.606013  715643 kubeconfig.go:125] found "multinode-619557" server: "https://192.168.67.2:8443"
	I1227 09:48:20.606057  715643 api_server.go:166] Checking apiserver status ...
	I1227 09:48:20.606101  715643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1227 09:48:20.619042  715643 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2182/cgroup
	I1227 09:48:20.627631  715643 api_server.go:192] apiserver freezer: "8:freezer:/docker/862942f551ffa48e17a7fe64ea103726fc6f68d3a3644bdf3e5aea4aa02079fc/kubepods/burstable/pod26dcf9e12d949d8de0093265df2ea20b/34f77338e651c6339c397c98695fd433864f6664397e548f9330ebd6d32635b9"
	I1227 09:48:20.627708  715643 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/862942f551ffa48e17a7fe64ea103726fc6f68d3a3644bdf3e5aea4aa02079fc/kubepods/burstable/pod26dcf9e12d949d8de0093265df2ea20b/34f77338e651c6339c397c98695fd433864f6664397e548f9330ebd6d32635b9/freezer.state
	I1227 09:48:20.634940  715643 api_server.go:214] freezer state: "THAWED"
	I1227 09:48:20.634971  715643 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1227 09:48:20.644050  715643 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1227 09:48:20.644082  715643 status.go:463] multinode-619557 apiserver status = Running (err=<nil>)
	I1227 09:48:20.644093  715643 status.go:176] multinode-619557 status: &{Name:multinode-619557 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:48:20.644127  715643 status.go:174] checking status of multinode-619557-m02 ...
	I1227 09:48:20.644458  715643 cli_runner.go:164] Run: docker container inspect multinode-619557-m02 --format={{.State.Status}}
	I1227 09:48:20.663303  715643 status.go:371] multinode-619557-m02 host status = "Running" (err=<nil>)
	I1227 09:48:20.663333  715643 host.go:66] Checking if "multinode-619557-m02" exists ...
	I1227 09:48:20.663630  715643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-619557-m02
	I1227 09:48:20.682360  715643 host.go:66] Checking if "multinode-619557-m02" exists ...
	I1227 09:48:20.682669  715643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1227 09:48:20.682731  715643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-619557-m02
	I1227 09:48:20.699809  715643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33643 SSHKeyPath:/home/jenkins/minikube-integration/22343-548332/.minikube/machines/multinode-619557-m02/id_rsa Username:docker}
	I1227 09:48:20.795973  715643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1227 09:48:20.808578  715643 status.go:176] multinode-619557-m02 status: &{Name:multinode-619557-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:48:20.808608  715643 status.go:174] checking status of multinode-619557-m03 ...
	I1227 09:48:20.808910  715643 cli_runner.go:164] Run: docker container inspect multinode-619557-m03 --format={{.State.Status}}
	I1227 09:48:20.825464  715643 status.go:371] multinode-619557-m03 host status = "Stopped" (err=<nil>)
	I1227 09:48:20.825488  715643 status.go:384] host is not running, skipping remaining checks
	I1227 09:48:20.825495  715643 status.go:176] multinode-619557-m03 status: &{Name:multinode-619557-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
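
Note: as the Non-zero exit lines show, `minikube status` reports a stopped node through its exit code (7) as well as its output, so the test stops m03 and asserts on both:

    out/minikube-linux-arm64 -p multinode-619557 node stop m03
    out/minikube-linux-arm64 -p multinode-619557 status    # exit status 7 while m03's host is Stopped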

TestMultiNode/serial/StartAfterStop (9.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-619557 node start m03 -v=5 --alsologtostderr: (8.53194212s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.33s)

TestMultiNode/serial/RestartKeepsNodes (79.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-619557
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-619557
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-619557: (23.257154878s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-619557 --wait=true -v=5 --alsologtostderr
E1227 09:49:16.547840  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-619557 --wait=true -v=5 --alsologtostderr: (56.289966484s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-619557
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.68s)
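
Note: RestartKeepsNodes checks that a full stop/start preserves cluster membership by comparing `node list` output before and after; the cycle it ran, as a sketch:

    out/minikube-linux-arm64 node list -p multinode-619557
    out/minikube-linux-arm64 stop -p multinode-619557
    out/minikube-linux-arm64 start -p multinode-619557 --wait=true
    out/minikube-linux-arm64 node list -p multinode-619557    # expected to match the first listing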

TestMultiNode/serial/DeleteNode (5.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-619557 node delete m03: (5.029835707s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.73s)

TestMultiNode/serial/StopMultiNode (21.89s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-619557 stop: (21.708507717s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-619557 status: exit status 7 (94.146049ms)
-- stdout --
	multinode-619557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-619557-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr: exit status 7 (89.710872ms)
-- stdout --
	multinode-619557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-619557-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1227 09:50:17.420535  729355 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:50:17.420664  729355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:50:17.420675  729355 out.go:374] Setting ErrFile to fd 2...
	I1227 09:50:17.420680  729355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:50:17.420917  729355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:50:17.421101  729355 out.go:368] Setting JSON to false
	I1227 09:50:17.421144  729355 mustload.go:66] Loading cluster: multinode-619557
	I1227 09:50:17.421216  729355 notify.go:221] Checking for updates...
	I1227 09:50:17.422171  729355 config.go:182] Loaded profile config "multinode-619557": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:50:17.422195  729355 status.go:174] checking status of multinode-619557 ...
	I1227 09:50:17.422727  729355 cli_runner.go:164] Run: docker container inspect multinode-619557 --format={{.State.Status}}
	I1227 09:50:17.440382  729355 status.go:371] multinode-619557 host status = "Stopped" (err=<nil>)
	I1227 09:50:17.440406  729355 status.go:384] host is not running, skipping remaining checks
	I1227 09:50:17.440413  729355 status.go:176] multinode-619557 status: &{Name:multinode-619557 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1227 09:50:17.440440  729355 status.go:174] checking status of multinode-619557-m02 ...
	I1227 09:50:17.440730  729355 cli_runner.go:164] Run: docker container inspect multinode-619557-m02 --format={{.State.Status}}
	I1227 09:50:17.464409  729355 status.go:371] multinode-619557-m02 host status = "Stopped" (err=<nil>)
	I1227 09:50:17.464439  729355 status.go:384] host is not running, skipping remaining checks
	I1227 09:50:17.464446  729355 status.go:176] multinode-619557-m02 status: &{Name:multinode-619557-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.89s)

TestMultiNode/serial/RestartMultiNode (54.10s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-619557 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1227 09:51:06.959389  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-619557 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (53.393047816s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-619557 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.10s)

TestMultiNode/serial/ValidateNameConflict (32.73s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-619557
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-619557-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-619557-m02 --driver=docker  --container-runtime=docker: exit status 14 (86.737377ms)
-- stdout --
	* [multinode-619557-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-619557-m02' is duplicated with machine name 'multinode-619557-m02' in profile 'multinode-619557'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-619557-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-619557-m03 --driver=docker  --container-runtime=docker: (29.98969718s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-619557
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-619557: exit status 80 (338.786223ms)
-- stdout --
	* Adding node m03 to cluster multinode-619557 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-619557-m03 already exists in multinode-619557-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-619557-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-619557-m03: (2.266053409s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.73s)
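
Note: both rejections above are intentional: a new profile may not reuse a machine name belonging to an existing multi-node profile (exit 14, MK_USAGE), and `node add` refuses when the next node name is already taken by another profile (exit 80, GUEST_NODE_ADD); reproduced directly with this run's names:

    out/minikube-linux-arm64 start -p multinode-619557-m02 --driver=docker --container-runtime=docker   # exit 14
    out/minikube-linux-arm64 node add -p multinode-619557   # exit 80 while profile multinode-619557-m03 exists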

TestScheduledStopUnix (101.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-621449 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-621449 --memory=3072 --driver=docker  --container-runtime=docker: (28.683421682s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621449 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 09:52:17.280300  743159 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:52:17.280441  743159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:17.280452  743159 out.go:374] Setting ErrFile to fd 2...
	I1227 09:52:17.280458  743159 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:17.280707  743159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:52:17.280959  743159 out.go:368] Setting JSON to false
	I1227 09:52:17.281077  743159 mustload.go:66] Loading cluster: scheduled-stop-621449
	I1227 09:52:17.281431  743159 config.go:182] Loaded profile config "scheduled-stop-621449": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:52:17.281511  743159 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/scheduled-stop-621449/config.json ...
	I1227 09:52:17.281695  743159 mustload.go:66] Loading cluster: scheduled-stop-621449
	I1227 09:52:17.281867  743159 config.go:182] Loaded profile config "scheduled-stop-621449": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-621449 -n scheduled-stop-621449
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621449 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 09:52:17.725258  743250 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:52:17.725433  743250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:17.725463  743250 out.go:374] Setting ErrFile to fd 2...
	I1227 09:52:17.725484  743250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:17.725875  743250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:52:17.726228  743250 out.go:368] Setting JSON to false
	I1227 09:52:17.727267  743250 daemonize_unix.go:73] killing process 743182 as it is an old scheduled stop
	I1227 09:52:17.730801  743250 mustload.go:66] Loading cluster: scheduled-stop-621449
	I1227 09:52:17.731343  743250 config.go:182] Loaded profile config "scheduled-stop-621449": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:52:17.731437  743250 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/scheduled-stop-621449/config.json ...
	I1227 09:52:17.731636  743250 mustload.go:66] Loading cluster: scheduled-stop-621449
	I1227 09:52:17.731749  743250 config.go:182] Loaded profile config "scheduled-stop-621449": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1227 09:52:17.737542  550197 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/scheduled-stop-621449/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621449 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621449 -n scheduled-stop-621449
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621449
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621449 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1227 09:52:43.636009  743974 out.go:360] Setting OutFile to fd 1 ...
	I1227 09:52:43.636207  743974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:43.636239  743974 out.go:374] Setting ErrFile to fd 2...
	I1227 09:52:43.636262  743974 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1227 09:52:43.636558  743974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22343-548332/.minikube/bin
	I1227 09:52:43.636854  743974 out.go:368] Setting JSON to false
	I1227 09:52:43.636993  743974 mustload.go:66] Loading cluster: scheduled-stop-621449
	I1227 09:52:43.637401  743974 config.go:182] Loaded profile config "scheduled-stop-621449": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1227 09:52:43.637508  743974 profile.go:143] Saving config to /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/scheduled-stop-621449/config.json ...
	I1227 09:52:43.637722  743974 mustload.go:66] Loading cluster: scheduled-stop-621449
	I1227 09:52:43.637872  743974 config.go:182] Loaded profile config "scheduled-stop-621449": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
E1227 09:52:53.494098  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621449
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-621449: exit status 7 (65.323162ms)
-- stdout --
	scheduled-stop-621449
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621449 -n scheduled-stop-621449
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621449 -n scheduled-stop-621449: exit status 7 (61.469041ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-621449" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-621449
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-621449: (1.630038553s)
--- PASS: TestScheduledStopUnix (101.87s)
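
Note: scheduled stop is driven entirely through `minikube stop` flags, and re-arming kills the previously scheduled stop process (see "killing process ... as it is an old scheduled stop" above); the cycle the test walks through, as a sketch:

    out/minikube-linux-arm64 stop -p scheduled-stop-621449 --schedule 5m     # arm a stop 5 minutes out
    out/minikube-linux-arm64 stop -p scheduled-stop-621449 --schedule 15s    # re-arm, replacing the old schedule
    out/minikube-linux-arm64 stop -p scheduled-stop-621449 --cancel-scheduled
    out/minikube-linux-arm64 status -p scheduled-stop-621449                 # exit 7 once a stop has fired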

TestSkaffold (137.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2925208292 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-964044 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-964044 --memory=3072 --driver=docker  --container-runtime=docker: (29.847707803s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2925208292 run --minikube-profile skaffold-964044 --kube-context skaffold-964044 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2925208292 run --minikube-profile skaffold-964044 --kube-context skaffold-964044 --status-check=true --port-forward=false --interactive=false: (1m31.398928481s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-8456467974-rsrp6" [d175f5f2-43d1-48b7-9d2a-c5ad645b6c39] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003247784s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-77697bb56b-nw2pc" [e94ad21f-a8f3-4307-9f14-126f9d167969] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 6.004444619s
helpers_test.go:176: Cleaning up "skaffold-964044" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-964044
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-964044: (3.115341225s)
--- PASS: TestSkaffold (137.79s)
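
Note: TestSkaffold points a pinned skaffold release at a fresh minikube profile and waits for the two demo deployments (leeroy-app, leeroy-web) to become healthy; the core of the flow (the /tmp skaffold binary name is generated per run):

    out/minikube-linux-arm64 start -p skaffold-964044 --memory=3072 --driver=docker --container-runtime=docker
    /tmp/skaffold.exe2925208292 run --minikube-profile skaffold-964044 --kube-context skaffold-964044 \
      --status-check=true --port-forward=false --interactive=false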

TestInsufficientStorage (12.87s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-949656 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-949656 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.521509124s)
-- stdout --
	{"specversion":"1.0","id":"9392ba62-7d65-48e5-b83d-6b67d906cd39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-949656] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"757c2709-a1ad-4854-b926-e641a14d1725","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22343"}}
	{"specversion":"1.0","id":"0ee6feff-de53-440e-9704-8907384b3bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d659ea78-eeb8-4b74-b5af-a7bf000327e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig"}}
	{"specversion":"1.0","id":"fe3603b0-b4ac-4748-9f5c-ec09b4ddf1bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube"}}
	{"specversion":"1.0","id":"be8c6a5d-ad7c-4529-9842-43c798958130","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6fbc8228-7af8-4256-8142-6aae7d7811ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"37ea5490-4357-4d93-b385-2b978429f26f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b2071a40-e30b-4902-a0db-0de411acbde6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"61be8acc-1bf6-4811-b94d-76dba6cb45da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebc33548-77a2-4dfa-a60b-b60640bbd333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c534fe54-886a-4704-aee7-1395b5006cb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-949656\" primary control-plane node in \"insufficient-storage-949656\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fb816e9-5037-498b-b781-a60e6e32e952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766570851-22316 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"17b6e57f-77cc-40d7-9516-30f258c96e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"76e03efe-1d44-4020-9764-6c3578490128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-949656 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-949656 --output=json --layout=cluster: exit status 7 (283.635091ms)
-- stdout --
	{"Name":"insufficient-storage-949656","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-949656","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1227 09:55:58.981090  754525 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-949656" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-949656 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-949656 --output=json --layout=cluster: exit status 7 (302.565998ms)
-- stdout --
	{"Name":"insufficient-storage-949656","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-949656","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1227 09:55:59.283916  754592 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-949656" does not appear in /home/jenkins/minikube-integration/22343-548332/kubeconfig
	E1227 09:55:59.293757  754592 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/insufficient-storage-949656/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-949656" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-949656
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-949656: (1.765041928s)
--- PASS: TestInsufficientStorage (12.87s)
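
Note: the out-of-disk condition appears to be simulated via the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE settings visible in the JSON events above, making start fail with RSRC_DOCKER_STORAGE (exit 26); per the error text itself, the check can be bypassed:

    out/minikube-linux-arm64 start -p insufficient-storage-949656 --output=json --driver=docker   # exit 26 in this run
    # '--force' would skip the disk-space check, as the error message advises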

TestRunningBinaryUpgrade (316.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1450388131 start -p running-upgrade-467375 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1227 10:12:53.494029  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1450388131 start -p running-upgrade-467375 --memory=3072 --vm-driver=docker  --container-runtime=docker: (30.84464786s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-467375 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1227 10:15:33.039182  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:15:50.017325  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:16:06.959775  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-467375 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.370421007s)
helpers_test.go:176: Cleaning up "running-upgrade-467375" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-467375
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-467375: (2.167135186s)
--- PASS: TestRunningBinaryUpgrade (316.24s)

TestKubernetesUpgrade (342.23s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.110862159s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-168776 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-168776 --alsologtostderr: (2.204561695s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-168776 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-168776 status --format={{.Host}}: exit status 7 (72.912759ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.921377981s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-168776 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (92.350611ms)

-- stdout --
	* [kubernetes-upgrade-168776] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-168776
	    minikube start -p kubernetes-upgrade-168776 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1687762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-168776 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1227 10:16:56.087988  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-168776 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.037005495s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-168776" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-168776
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-168776: (2.690634984s)
--- PASS: TestKubernetesUpgrade (342.23s)

TestMissingContainerUpgrade (82.08s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.4117179503 start -p missing-upgrade-301269 --memory=3072 --driver=docker  --container-runtime=docker
E1227 10:10:33.038314  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.4117179503 start -p missing-upgrade-301269 --memory=3072 --driver=docker  --container-runtime=docker: (31.543110683s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-301269
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-301269
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-301269 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1227 10:11:06.959603  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-301269 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.550447121s)
helpers_test.go:176: Cleaning up "missing-upgrade-301269" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-301269
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-301269: (2.329670455s)
--- PASS: TestMissingContainerUpgrade (82.08s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-243408 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-243408 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (120.47677ms)

-- stdout --
	* [NoKubernetes-243408] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22343
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22343-548332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22343-548332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (37.49s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-243408 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1227 09:56:06.962103  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-243408 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.000612412s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-243408 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.49s)

TestNoKubernetes/serial/StartWithStopK8s (18.4s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-243408 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-243408 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (16.26819288s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-243408 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-243408 status -o json: exit status 2 (338.040186ms)

-- stdout --
	{"Name":"NoKubernetes-243408","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-243408
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-243408: (1.797081041s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.40s)

TestNoKubernetes/serial/Start (8.88s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-243408 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-243408 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.879158596s)
--- PASS: TestNoKubernetes/serial/Start (8.88s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22343-548332/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-243408 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-243408 "sudo systemctl is-active --quiet service kubelet": exit status 1 (302.199987ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (1.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.3s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-243408
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-243408: (1.303526558s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (8.2s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-243408 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-243408 --driver=docker  --container-runtime=docker: (8.20265521s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-243408 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-243408 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.482786ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.78s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.78s)

TestStoppedBinaryUpgrade/Upgrade (336.82s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.1027513555 start -p stopped-upgrade-784214 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1227 10:07:53.493758  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.1027513555 start -p stopped-upgrade-784214 --memory=3072 --vm-driver=docker  --container-runtime=docker: (57.8655059s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.1027513555 -p stopped-upgrade-784214 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.1027513555 -p stopped-upgrade-784214 stop: (10.887556932s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-784214 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-784214 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.068574058s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (336.82s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-784214
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestPreload/Start-NoPreload-PullImage (89.12s)
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-754592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
E1227 10:17:53.493732  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-754592 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m16.975927368s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-754592 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-754592
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-754592: (11.281632872s)
--- PASS: TestPreload/Start-NoPreload-PullImage (89.12s)

TestPause/serial/Start (69.92s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-122074 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-122074 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m9.923614644s)
--- PASS: TestPause/serial/Start (69.92s)

TestPreload/Restart-With-Preload-Check-User-Image (52.76s)
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-754592 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-754592 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (52.426152354s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-754592 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (52.76s)

TestPause/serial/SecondStartNoReconfiguration (37.76s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-122074 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-122074 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.743849708s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.76s)

TestNetworkPlugins/group/auto/Start (76.38s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m16.384006154s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.38s)

TestPause/serial/Pause (0.93s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-122074 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.44s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-122074 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-122074 --output=json --layout=cluster: exit status 2 (439.306755ms)

-- stdout --
	{"Name":"pause-122074","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-122074","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

TestPause/serial/Unpause (0.71s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-122074 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

TestPause/serial/PauseAgain (1.06s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-122074 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-122074 --alsologtostderr -v=5: (1.055295654s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

TestPause/serial/DeletePaused (2.53s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-122074 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-122074 --alsologtostderr -v=5: (2.526951655s)
--- PASS: TestPause/serial/DeletePaused (2.53s)

TestPause/serial/VerifyDeletedResources (0.5s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-122074
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-122074: exit status 1 (32.437405ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-122074: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)

TestNetworkPlugins/group/flannel/Start (51.68s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E1227 10:20:33.038446  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (51.683281076s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.68s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-fc89c" [4e869848-2e3a-45ee-9e9d-8e794b325429] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003742925s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-334346 "pgrep -a kubelet"
I1227 10:20:47.182382  550197 config.go:182] Loaded profile config "auto-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-2ckfg" [8b42ecf4-f64e-4804-b02e-4d827d1b90ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-2ckfg" [8b42ecf4-f64e-4804-b02e-4d827d1b90ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003818495s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-334346 "pgrep -a kubelet"
I1227 10:20:49.951509  550197 config.go:182] Loaded profile config "flannel-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/flannel/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-9lcv8" [70bf2216-202c-432a-8b56-0d06e754d694] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-9lcv8" [70bf2216-202c-432a-8b56-0d06e754d694] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005650889s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/calico/Start (81s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m21.002020846s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.00s)

TestNetworkPlugins/group/custom-flannel/Start (51.81s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (51.813192026s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.81s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-334346 "pgrep -a kubelet"
I1227 10:22:20.760984  550197 config.go:182] Loaded profile config "custom-flannel-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.5s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-kvxcp" [8ed9b214-237b-415e-a19e-9c8a552ba688] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-kvxcp" [8ed9b214-237b-415e-a19e-9c8a552ba688] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003235165s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.50s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-qbnv2" [a53a2350-ea8c-4e10-a53d-216aa317d592] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00272711s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-334346 "pgrep -a kubelet"
I1227 10:22:50.023982  550197 config.go:182] Loaded profile config "calico-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (12.36s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5gwxc" [a9adfebb-61fc-4318-b8e3-fee492229c3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 10:22:53.494342  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-5gwxc" [a9adfebb-61fc-4318-b8e3-fee492229c3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004671294s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.36s)

TestNetworkPlugins/group/false/Start (77.63s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m17.627986689s)
--- PASS: TestNetworkPlugins/group/false/Start (77.63s)

TestNetworkPlugins/group/calico/DNS (0.32s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

TestNetworkPlugins/group/calico/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/Start (52.78s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (52.782506711s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.78s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-334346 "pgrep -a kubelet"
I1227 10:24:14.566018  550197 config.go:182] Loaded profile config "false-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-455rb" [54578025-a345-4e03-ad6d-e93aeb67be8e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-455rb" [54578025-a345-4e03-ad6d-e93aeb67be8e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004202171s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-454f7" [cf430bc5-f417-4347-9026-8a43579ac954] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003969598s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/false/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-334346 "pgrep -a kubelet"
I1227 10:24:28.155602  550197 config.go:182] Loaded profile config "kindnet-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-r76t2" [0a174718-64c6-433a-9697-d071d52e2660] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-r76t2" [0a174718-64c6-433a-9697-d071d52e2660] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003876702s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/kubenet/Start (72.28s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m12.27506631s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (72.28s)

TestNetworkPlugins/group/enable-default-cni/Start (72.03s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1227 10:25:33.038832  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.568192  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.573468  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.583716  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.603979  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.644490  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.724831  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:43.885635  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:44.206181  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:44.846811  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:46.127284  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.438389  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.443743  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.454225  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.474638  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.514889  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.595320  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:47.756488  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:48.077032  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:48.687510  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:48.717759  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:49.998423  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:52.559277  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:53.808185  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:25:57.679821  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m12.027395412s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.03s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-334346 "pgrep -a kubelet"
I1227 10:25:59.671199  550197 config.go:182] Loaded profile config "kubenet-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mltrr" [e081ce59-d753-451a-aa86-be20743dee3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mltrr" [e081ce59-d753-451a-aa86-be20743dee3f] Running
E1227 10:26:04.049204  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:06.959435  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:07.920097  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.00367504s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.26s)

TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-334346 "pgrep -a kubelet"
I1227 10:26:17.307856  550197 config.go:182] Loaded profile config "enable-default-cni-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8shjz" [cebe67ad-e6dc-47b2-badc-a3572ce30ef5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8shjz" [cebe67ad-e6dc-47b2-badc-a3572ce30ef5] Running
E1227 10:26:24.529368  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:26:28.400584  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.002912608s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.30s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

TestNetworkPlugins/group/bridge/Start (79.01s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-334346 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m19.012058926s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.01s)

TestStartStop/group/old-k8s-version/serial/FirstStart (93.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-478160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1227 10:27:05.489616  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:09.361613  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.142891  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.148148  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.158418  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.178713  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.218998  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.299340  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.459696  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:21.780330  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:22.421187  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:23.701626  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:26.261832  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:31.382519  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:41.623658  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.624048  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.629298  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.639623  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.659941  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.700453  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.780863  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:43.941427  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:44.261975  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:44.902193  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:46.182516  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:48.742738  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-478160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m33.886848826s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (93.89s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-334346 "pgrep -a kubelet"
I1227 10:27:51.428269  550197 config.go:182] Loaded profile config "bridge-334346": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-334346 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-vnjt2" [ce5adc5e-dae9-4e17-9e0c-5f79fedc4795] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1227 10:27:53.493968  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:27:53.863599  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-vnjt2" [ce5adc5e-dae9-4e17-9e0c-5f79fedc4795] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003772087s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-334346 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1227 10:28:02.104684  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-334346 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (42.61s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-675096 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1227 10:28:24.584945  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-675096 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (42.607463434s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.61s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-478160 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [af5d4856-09b9-46ee-90f7-0f445d4ca3ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1227 10:28:27.411487  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:28:31.282270  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [af5d4856-09b9-46ee-90f7-0f445d4ca3ad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004140157s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-478160 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-478160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-478160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.241985727s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-478160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.36s)

TestStartStop/group/old-k8s-version/serial/Stop (11.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-478160 --alsologtostderr -v=3
E1227 10:28:43.065728  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-478160 --alsologtostderr -v=3: (11.537789794s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-478160 -n old-k8s-version-478160
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-478160 -n old-k8s-version-478160: exit status 7 (104.579382ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-478160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (59.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-478160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-478160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (58.669674376s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-478160 -n old-k8s-version-478160
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (59.11s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-675096 create -f testdata/busybox.yaml
E1227 10:29:05.548506  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9fbb60d6-ae51-4ae9-86d5-e3b568c5189f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9fbb60d6-ae51-4ae9-86d5-e3b568c5189f] Running
E1227 10:29:14.837920  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:14.843358  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:14.853828  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:14.874155  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:14.914639  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:14.995172  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:15.155735  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:15.476317  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00376113s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-675096 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-675096 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1227 10:29:16.117572  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-675096 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/embed-certs/serial/Stop (11.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-675096 --alsologtostderr -v=3
E1227 10:29:17.398712  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:19.959382  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:21.858639  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:21.863744  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:21.873985  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:21.894236  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:21.934499  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:22.014792  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:22.175050  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:22.496046  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:23.136473  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:24.416709  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:25.080541  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:26.977580  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-675096 --alsologtostderr -v=3: (11.335327069s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-675096 -n embed-certs-675096
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-675096 -n embed-certs-675096: exit status 7 (70.71009ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-675096 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (53.4s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-675096 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1227 10:29:32.098328  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:35.321582  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:29:42.338784  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-675096 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (52.982169347s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-675096 -n embed-certs-675096
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.40s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-rx7gw" [486e23da-5324-4c84-bae4-07278627687f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003942426s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-rx7gw" [486e23da-5324-4c84-bae4-07278627687f] Running
E1227 10:29:55.801854  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003808327s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-478160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-478160 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.59s)

TestStartStop/group/old-k8s-version/serial/Pause (3.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-478160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-478160 -n old-k8s-version-478160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-478160 -n old-k8s-version-478160: exit status 2 (434.230721ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-478160 -n old-k8s-version-478160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-478160 -n old-k8s-version-478160: exit status 2 (366.987872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-478160 --alsologtostderr -v=1
E1227 10:30:02.819332  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-478160 -n old-k8s-version-478160
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-478160 -n old-k8s-version-478160
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.44s)

TestStartStop/group/no-preload/serial/FirstStart (79.33s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-747766 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-747766 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m19.329900948s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.33s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f5b5v" [1a2d0ec4-3cc8-4af9-a157-50699f1f3a63] Running
E1227 10:30:27.469052  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005164145s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-f5b5v" [1a2d0ec4-3cc8-4af9-a157-50699f1f3a63] Running
E1227 10:30:33.038634  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003966596s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-675096 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-675096 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-675096 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-675096 --alsologtostderr -v=1: (1.02857936s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-675096 -n embed-certs-675096
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-675096 -n embed-certs-675096: exit status 2 (459.383402ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-675096 -n embed-certs-675096
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-675096 -n embed-certs-675096: exit status 2 (434.049737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-675096 --alsologtostderr -v=1
E1227 10:30:36.762380  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-675096 -n embed-certs-675096
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-675096 -n embed-certs-675096
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.92s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-767370 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1227 10:30:43.567884  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:43.779494  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:47.438089  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:59.909568  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:59.914802  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:59.925043  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:59.945294  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:30:59.985544  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:00.065966  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:00.243638  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:00.564301  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:01.204894  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:02.485287  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:05.046270  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:06.959263  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:10.167397  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:11.251920  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:15.123292  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/auto-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.638996  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.644313  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.654595  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.674865  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.715267  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.795660  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:17.956083  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:18.276371  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:18.917100  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:20.197338  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:20.407702  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:22.757567  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-767370 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m10.521099174s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.52s)

TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-747766 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6ce454bd-5be2-4b5e-95bd-413a487d0169] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1227 10:31:27.878195  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [6ce454bd-5be2-4b5e-95bd-413a487d0169] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004096388s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-747766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-747766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-747766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (11.35s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-747766 --alsologtostderr -v=3
E1227 10:31:38.118719  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:40.887935  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-747766 --alsologtostderr -v=3: (11.346916431s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-747766 -n no-preload-747766
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-747766 -n no-preload-747766: exit status 7 (85.528027ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-747766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (55.11s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-747766 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-747766 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (54.702351997s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-747766 -n no-preload-747766
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.11s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-767370 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7416f7d7-03a4-4a0c-b2ef-c243bc0d991b] Pending
helpers_test.go:353: "busybox" [7416f7d7-03a4-4a0c-b2ef-c243bc0d991b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7416f7d7-03a4-4a0c-b2ef-c243bc0d991b] Running
E1227 10:31:58.599105  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:31:58.682975  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005307249s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-767370 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-767370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-767370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.439109876s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-767370 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-767370 --alsologtostderr -v=3
E1227 10:32:05.700530  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kindnet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-767370 --alsologtostderr -v=3: (11.668495534s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370: exit status 7 (68.734588ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-767370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-767370 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1227 10:32:21.143373  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:21.848474  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:30.017714  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/addons-071879/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:39.560613  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:43.624695  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-767370 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (55.534786934s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4h78r" [f6afa136-4077-4941-83a9-0ca9fa359b63] Running
E1227 10:32:48.826416  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/custom-flannel-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002629986s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4h78r" [f6afa136-4077-4941-83a9-0ca9fa359b63] Running
E1227 10:32:51.708858  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:51.714084  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:51.724612  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:51.745005  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:51.785690  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:51.866157  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:52.026653  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:52.347209  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:52.987450  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:53.494682  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/functional-918607/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:32:54.267673  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003684839s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-747766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-747766 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-747766 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747766 -n no-preload-747766
E1227 10:32:56.828857  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747766 -n no-preload-747766: exit status 2 (349.289142ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747766 -n no-preload-747766
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747766 -n no-preload-747766: exit status 2 (337.867858ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-747766 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747766 -n no-preload-747766
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747766 -n no-preload-747766
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

TestStartStop/group/newest-cni/serial/FirstStart (40.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-720267 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1227 10:33:01.951332  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:33:11.310043  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/calico-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-720267 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (40.042231949s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.04s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l2xq5" [d42e64f4-65cf-4d97-921a-8cfba21a8558] Running
E1227 10:33:12.191784  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003463226s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l2xq5" [d42e64f4-65cf-4d97-921a-8cfba21a8558] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003964719s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-767370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-767370 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-767370 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370: exit status 2 (402.995059ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370: exit status 2 (422.135738ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-767370 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-767370 -n default-k8s-diff-port-767370
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

TestPreload/PreloadSrc/gcs (4.06s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-765643 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E1227 10:33:32.406857  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/old-k8s-version-478160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:33:32.672469  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-765643 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (3.819047089s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-765643" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-765643
--- PASS: TestPreload/PreloadSrc/gcs (4.06s)

TestPreload/PreloadSrc/github (4.67s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-003348 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E1227 10:33:36.088456  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/skaffold-964044/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:33:37.528067  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/old-k8s-version-478160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-003348 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (4.427038945s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-003348" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-003348
--- PASS: TestPreload/PreloadSrc/github (4.67s)

TestPreload/PreloadSrc/gcs-cached (0.59s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-332084 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-332084" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-332084
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.59s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-720267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-720267 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.62967819s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

TestStartStop/group/newest-cni/serial/Stop (11.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-720267 --alsologtostderr -v=3
E1227 10:33:43.769422  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/kubenet-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:33:47.769238  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/old-k8s-version-478160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-720267 --alsologtostderr -v=3: (11.215728756s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-720267 -n newest-cni-720267
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-720267 -n newest-cni-720267: exit status 7 (71.723143ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-720267 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (17.35s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-720267 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1227 10:34:01.481403  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/enable-default-cni-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1227 10:34:08.249768  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/old-k8s-version-478160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-720267 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (16.9493469s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-720267 -n newest-cni-720267
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.35s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-720267 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-720267 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-720267 -n newest-cni-720267
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-720267 -n newest-cni-720267: exit status 2 (322.912517ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-720267 -n newest-cni-720267
E1227 10:34:13.633630  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/bridge-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-720267 -n newest-cni-720267: exit status 2 (316.516613ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-720267 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-720267 -n newest-cni-720267
E1227 10:34:14.837780  550197 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/false-334346/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-720267 -n newest-cni-720267
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.94s)

Test skip (26/352)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-089639 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-089639" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-089639
--- SKIP: TestDownloadOnlyKic (0.45s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-334346 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-334346

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-334346

>>> host: /etc/nsswitch.conf:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/hosts:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/resolv.conf:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-334346

>>> host: crictl pods:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: crictl containers:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> k8s: describe netcat deployment:
error: context "cilium-334346" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-334346" does not exist

>>> k8s: netcat logs:
error: context "cilium-334346" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-334346" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-334346" does not exist

>>> k8s: coredns logs:
error: context "cilium-334346" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-334346" does not exist

>>> k8s: api server logs:
error: context "cilium-334346" does not exist

>>> host: /etc/cni:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: ip a s:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: ip r s:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: iptables-save:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: iptables table nat:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-334346

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-334346

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-334346" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-334346" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-334346

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-334346

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-334346" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-334346" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-334346" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-334346" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-334346" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: kubelet daemon config:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> k8s: kubelet logs:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22343-548332/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:56:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-663445
contexts:
- context:
    cluster: offline-docker-663445
    extensions:
    - extension:
        last-update: Sat, 27 Dec 2025 09:56:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-docker-663445
  name: offline-docker-663445
current-context: offline-docker-663445
kind: Config
preferences: {}
users:
- name: offline-docker-663445
  user:
    client-certificate: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/offline-docker-663445/client.crt
    client-key: /home/jenkins/minikube-integration/22343-548332/.minikube/profiles/offline-docker-663445/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-334346

>>> host: docker daemon status:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: docker daemon config:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: docker system info:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: cri-docker daemon status:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: cri-docker daemon config:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: cri-dockerd version:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: containerd daemon status:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: containerd daemon config:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: containerd config dump:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: crio daemon status:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: crio daemon config:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: /etc/crio:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

>>> host: crio config:
* Profile "cilium-334346" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-334346"

----------------------- debugLogs end: cilium-334346 [took: 4.154559279s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-334346" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-334346
--- SKIP: TestNetworkPlugins/group/cilium (4.33s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-055187" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-055187
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
