Test Report: Docker_Linux_docker_arm64 22414

7225a17c4161ad48c671012cf8528dba752659f9:2026-01-10:43179

Failed tests: 2 of 352

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  508.91
53     TestForceSystemdEnv   508.79
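
Both failures have the same shape: "minikube start" with --force-systemd runs for roughly 8.5 minutes and then exits with status 109 (full log below). A minimal local reproduction sketch, assuming an arm64 Linux host with Docker installed and a minikube binary built at out/minikube-linux-arm64; the profile name is copied from the log and is otherwise arbitrary:

    # Rerun the exact start command the test issued (verbatim from the log below).
    out/minikube-linux-arm64 start -p force-systemd-flag-389625 \
      --memory=3072 --force-systemd --alsologtostderr -v=5 \
      --driver=docker --container-runtime=docker

    # Remove the profile and its Docker network/volume when finished.
    out/minikube-linux-arm64 delete -p force-systemd-flag-389625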
TestForceSystemdFlag (508.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-389625 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0110 02:32:55.261227 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:12.146258 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:45.884914 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:45.890323 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:45.900733 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:45.920998 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:45.961291 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:46.041718 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:46.202126 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:46.522823 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:47.163780 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:48.444452 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:51.006158 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:34:56.126407 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:35:06.366837 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:35:26.847082 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:36:07.808692 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:36:09.096432 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:37:29.728903 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:37:55.261192 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:39:45.884463 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-389625 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m24.168711646s)
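
The pass condition the run never reached is presumably that --force-systemd takes effect inside the node. A hedged manual check, assuming the cluster had come up ("docker info" does expose a CgroupDriver field, as the stderr dumps below show):

    # Ask the Docker daemon inside the node which cgroup driver it uses;
    # with --force-systemd this should report "systemd" rather than "cgroupfs".
    out/minikube-linux-arm64 -p force-systemd-flag-389625 ssh \
      "docker info --format '{{.CgroupDriver}}'"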

-- stdout --
	* [force-systemd-flag-389625] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-389625" primary control-plane node in "force-systemd-flag-389625" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	
	

-- /stdout --
** stderr ** 
	I0110 02:31:31.403273 2444942 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:31:31.403569 2444942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:31.403599 2444942 out.go:374] Setting ErrFile to fd 2...
	I0110 02:31:31.403618 2444942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:31.403919 2444942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:31:31.404424 2444942 out.go:368] Setting JSON to false
	I0110 02:31:31.405395 2444942 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36841,"bootTime":1767975451,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 02:31:31.405497 2444942 start.go:143] virtualization:  
	I0110 02:31:31.408819 2444942 out.go:179] * [force-systemd-flag-389625] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:31:31.412885 2444942 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:31:31.412964 2444942 notify.go:221] Checking for updates...
	I0110 02:31:31.425190 2444942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:31:31.428163 2444942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 02:31:31.431030 2444942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 02:31:31.433941 2444942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:31:31.436853 2444942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:31:31.440329 2444942 config.go:182] Loaded profile config "force-systemd-env-405089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:31.440445 2444942 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:31:31.473277 2444942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:31:31.473389 2444942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:31.569510 2444942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2026-01-10 02:31:31.559356986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:31.569623 2444942 docker.go:319] overlay module found
	I0110 02:31:31.577195 2444942 out.go:179] * Using the docker driver based on user configuration
	I0110 02:31:31.580216 2444942 start.go:309] selected driver: docker
	I0110 02:31:31.580239 2444942 start.go:928] validating driver "docker" against <nil>
	I0110 02:31:31.580254 2444942 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:31:31.580972 2444942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:31.685470 2444942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2026-01-10 02:31:31.673022095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:31.685622 2444942 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:31:31.685842 2444942 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:31:31.695072 2444942 out.go:179] * Using Docker driver with root privileges
	I0110 02:31:31.704472 2444942 cni.go:84] Creating CNI manager for ""
	I0110 02:31:31.704566 2444942 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:31.704582 2444942 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 02:31:31.704671 2444942 start.go:353] cluster config:
	{Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:31.716615 2444942 out.go:179] * Starting "force-systemd-flag-389625" primary control-plane node in "force-systemd-flag-389625" cluster
	I0110 02:31:31.725232 2444942 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 02:31:31.731542 2444942 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:31:31.734740 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:31.734792 2444942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 02:31:31.734803 2444942 cache.go:65] Caching tarball of preloaded images
	I0110 02:31:31.734922 2444942 preload.go:251] Found /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 02:31:31.734933 2444942 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 02:31:31.735052 2444942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json ...
	I0110 02:31:31.735070 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json: {Name:mkf231dfddb62b8df14c42136e70d1c72c396e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:31.735223 2444942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:31:31.768290 2444942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:31:31.768314 2444942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:31:31.768329 2444942 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:31:31.768360 2444942 start.go:360] acquireMachinesLock for force-systemd-flag-389625: {Name:mkda4641748142b11aadec6867161d872c9610a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:31:31.768468 2444942 start.go:364] duration metric: took 88.236µs to acquireMachinesLock for "force-systemd-flag-389625"
	I0110 02:31:31.768503 2444942 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 02:31:31.768575 2444942 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:31:31.770687 2444942 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:31:31.770958 2444942 start.go:159] libmachine.API.Create for "force-systemd-flag-389625" (driver="docker")
	I0110 02:31:31.770996 2444942 client.go:173] LocalClient.Create starting
	I0110 02:31:31.771061 2444942 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem
	I0110 02:31:31.771107 2444942 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:31.771131 2444942 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:31.771194 2444942 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem
	I0110 02:31:31.771216 2444942 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:31.771231 2444942 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:31.771599 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:31:31.789231 2444942 cli_runner.go:211] docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:31:31.789311 2444942 network_create.go:284] running [docker network inspect force-systemd-flag-389625] to gather additional debugging logs...
	I0110 02:31:31.789330 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625
	W0110 02:31:31.804491 2444942 cli_runner.go:211] docker network inspect force-systemd-flag-389625 returned with exit code 1
	I0110 02:31:31.804519 2444942 network_create.go:287] error running [docker network inspect force-systemd-flag-389625]: docker network inspect force-systemd-flag-389625: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-389625 not found
	I0110 02:31:31.804531 2444942 network_create.go:289] output of [docker network inspect force-systemd-flag-389625]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-389625 not found
	
	** /stderr **
	I0110 02:31:31.804633 2444942 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:31.821447 2444942 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
	I0110 02:31:31.821788 2444942 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0306382db894 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:0a:12:a6:69:af} reservation:<nil>}
	I0110 02:31:31.822120 2444942 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42f1ed7cacde IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:5d:25:88:ef:ef} reservation:<nil>}
	I0110 02:31:31.822532 2444942 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001977430}
	I0110 02:31:31.822549 2444942 network_create.go:124] attempt to create docker network force-systemd-flag-389625 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:31:31.822614 2444942 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-389625 force-systemd-flag-389625
	I0110 02:31:31.879729 2444942 network_create.go:108] docker network force-systemd-flag-389625 192.168.76.0/24 created
	I0110 02:31:31.879758 2444942 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-389625" container
	I0110 02:31:31.879830 2444942 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:31:31.907715 2444942 cli_runner.go:164] Run: docker volume create force-systemd-flag-389625 --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:31:31.939677 2444942 oci.go:103] Successfully created a docker volume force-systemd-flag-389625
	I0110 02:31:31.939777 2444942 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-389625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --entrypoint /usr/bin/test -v force-systemd-flag-389625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:31:33.763406 2444942 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-389625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --entrypoint /usr/bin/test -v force-systemd-flag-389625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (1.823586252s)
	I0110 02:31:33.763439 2444942 oci.go:107] Successfully prepared a docker volume force-systemd-flag-389625
	I0110 02:31:33.763488 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:33.763505 2444942 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:31:33.763585 2444942 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-389625:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:31:36.676943 2444942 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-389625:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (2.913316987s)
	I0110 02:31:36.676976 2444942 kic.go:203] duration metric: took 2.913468033s to extract preloaded images to volume ...
	W0110 02:31:36.677157 2444942 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:31:36.677267 2444942 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:31:36.733133 2444942 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-389625 --name force-systemd-flag-389625 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-389625 --network force-systemd-flag-389625 --ip 192.168.76.2 --volume force-systemd-flag-389625:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:31:37.020083 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Running}}
	I0110 02:31:37.049554 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.073410 2444942 cli_runner.go:164] Run: docker exec force-systemd-flag-389625 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:31:37.123872 2444942 oci.go:144] the created container "force-systemd-flag-389625" has a running status.
	I0110 02:31:37.123914 2444942 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa...
	I0110 02:31:37.219546 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:31:37.219643 2444942 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:31:37.246178 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.265663 2444942 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:31:37.265687 2444942 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-389625 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:31:37.315490 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.344025 2444942 machine.go:94] provisionDockerMachine start ...
	I0110 02:31:37.344113 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:37.365329 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.366213 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:37.366237 2444942 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:31:37.366917 2444942 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:31:40.525424 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-389625
	
	I0110 02:31:40.525452 2444942 ubuntu.go:182] provisioning hostname "force-systemd-flag-389625"
	I0110 02:31:40.525529 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:40.550883 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:40.551514 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:40.551534 2444942 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-389625 && echo "force-systemd-flag-389625" | sudo tee /etc/hostname
	I0110 02:31:40.741599 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-389625
	
	I0110 02:31:40.741787 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:40.769891 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:40.770349 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:40.770376 2444942 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-389625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-389625/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-389625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:31:40.933268 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:31:40.933300 2444942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2221005/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2221005/.minikube}
	I0110 02:31:40.933334 2444942 ubuntu.go:190] setting up certificates
	I0110 02:31:40.933344 2444942 provision.go:84] configureAuth start
	I0110 02:31:40.933425 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:40.954041 2444942 provision.go:143] copyHostCerts
	I0110 02:31:40.954074 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:40.954109 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem, removing ...
	I0110 02:31:40.954115 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:40.954187 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem (1082 bytes)
	I0110 02:31:40.954287 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:40.954306 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem, removing ...
	I0110 02:31:40.954311 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:40.954348 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem (1123 bytes)
	I0110 02:31:40.954426 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:40.954443 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem, removing ...
	I0110 02:31:40.954447 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:40.954472 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem (1679 bytes)
	I0110 02:31:40.954527 2444942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-389625 san=[127.0.0.1 192.168.76.2 force-systemd-flag-389625 localhost minikube]
	I0110 02:31:41.170708 2444942 provision.go:177] copyRemoteCerts
	I0110 02:31:41.170784 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:31:41.170832 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.191286 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:41.302379 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:31:41.302491 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 02:31:41.325187 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:31:41.325316 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:31:41.349568 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:31:41.349680 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:31:41.371181 2444942 provision.go:87] duration metric: took 437.80859ms to configureAuth
	I0110 02:31:41.371265 2444942 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:31:41.371507 2444942 config.go:182] Loaded profile config "force-systemd-flag-389625": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:41.371603 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.397226 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.397537 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.397547 2444942 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 02:31:41.564217 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 02:31:41.564316 2444942 ubuntu.go:71] root file system type: overlay
	I0110 02:31:41.564502 2444942 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 02:31:41.564636 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.591765 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.592086 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.592175 2444942 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 02:31:41.761531 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 02:31:41.761616 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.782449 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.782827 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.782851 2444942 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 02:31:43.042474 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 02:31:41.754593192 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0110 02:31:43.042496 2444942 machine.go:97] duration metric: took 5.698448584s to provisionDockerMachine
	I0110 02:31:43.042508 2444942 client.go:176] duration metric: took 11.271502022s to LocalClient.Create
	I0110 02:31:43.042522 2444942 start.go:167] duration metric: took 11.271565709s to libmachine.API.Create "force-systemd-flag-389625"
	I0110 02:31:43.042529 2444942 start.go:293] postStartSetup for "force-systemd-flag-389625" (driver="docker")
	I0110 02:31:43.042539 2444942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:31:43.042594 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:31:43.042629 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.076614 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.196482 2444942 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:31:43.201700 2444942 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:31:43.201726 2444942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:31:43.201737 2444942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/addons for local assets ...
	I0110 02:31:43.201796 2444942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/files for local assets ...
	I0110 02:31:43.201877 2444942 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> 22228772.pem in /etc/ssl/certs
	I0110 02:31:43.201885 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /etc/ssl/certs/22228772.pem
	I0110 02:31:43.201986 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:31:43.214196 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:43.241904 2444942 start.go:296] duration metric: took 199.360809ms for postStartSetup
	I0110 02:31:43.242273 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:43.263273 2444942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json ...
	I0110 02:31:43.263543 2444942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:31:43.263584 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.283380 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.391153 2444942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:31:43.396781 2444942 start.go:128] duration metric: took 11.628189455s to createHost
	I0110 02:31:43.396804 2444942 start.go:83] releasing machines lock for "force-systemd-flag-389625", held for 11.628322055s
	I0110 02:31:43.396875 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:43.415596 2444942 ssh_runner.go:195] Run: cat /version.json
	I0110 02:31:43.415661 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.415925 2444942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:31:43.415983 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.442514 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.477676 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.711077 2444942 ssh_runner.go:195] Run: systemctl --version
	I0110 02:31:43.721326 2444942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:31:43.726734 2444942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:31:43.726807 2444942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:31:43.760612 2444942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:31:43.760636 2444942 start.go:496] detecting cgroup driver to use...
	I0110 02:31:43.760650 2444942 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:43.760747 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:43.776486 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 02:31:43.785831 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 02:31:43.795047 2444942 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 02:31:43.795106 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 02:31:43.804716 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:43.814084 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 02:31:43.823155 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:43.832515 2444942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:31:43.841283 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 02:31:43.850677 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 02:31:43.859949 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 02:31:43.869426 2444942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:31:43.878026 2444942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:31:43.886454 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:44.030564 2444942 ssh_runner.go:195] Run: sudo systemctl restart containerd
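Annotation (not log output): the run of sed edits above rewrites /etc/containerd/config.toml in place, and the restart makes them effective. A minimal check one could run on the node, assuming the stock kicbase config layout:

	sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected matches (sketch): SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1",
	# conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true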
	I0110 02:31:44.134281 2444942 start.go:496] detecting cgroup driver to use...
	I0110 02:31:44.134314 2444942 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:44.134390 2444942 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 02:31:44.164357 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:44.178141 2444942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:31:44.203502 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:44.225293 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 02:31:44.259875 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:44.298197 2444942 ssh_runner.go:195] Run: which cri-dockerd
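Annotation: the /etc/crictl.yaml written just above repoints crictl from containerd to the cri-dockerd socket; its full content is the single line piped to tee:

	runtime-endpoint: unix:///var/run/cri-dockerd.sock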
	I0110 02:31:44.302282 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 02:31:44.310035 2444942 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 02:31:44.323184 2444942 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 02:31:44.479958 2444942 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 02:31:44.628745 2444942 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 02:31:44.628855 2444942 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
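Annotation: the 129-byte daemon.json payload is written from memory and never echoed into the log. A plausible reconstruction that forces the systemd cgroup driver (these are standard dockerd options, but the exact payload minikube wrote is an assumption):

	{"exec-opts":["native.cgroupdriver=systemd"],"log-driver":"json-file","log-opts":{"max-size":"100m"},"storage-driver":"overlay2"}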
	I0110 02:31:44.646424 2444942 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 02:31:44.659407 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:44.806969 2444942 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 02:31:45.429132 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:31:45.449741 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 02:31:45.466128 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:45.483936 2444942 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 02:31:45.652722 2444942 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 02:31:45.851372 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.020791 2444942 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 02:31:46.040175 2444942 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 02:31:46.054245 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.202922 2444942 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 02:31:46.282568 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:46.299250 2444942 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 02:31:46.299324 2444942 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 02:31:46.304150 2444942 start.go:574] Will wait 60s for crictl version
	I0110 02:31:46.304219 2444942 ssh_runner.go:195] Run: which crictl
	I0110 02:31:46.309882 2444942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:31:46.365333 2444942 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 02:31:46.365407 2444942 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:46.397294 2444942 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:46.430776 2444942 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 02:31:46.430856 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:46.446745 2444942 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:31:46.450899 2444942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:46.460438 2444942 kubeadm.go:884] updating cluster {Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:31:46.460546 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:46.460598 2444942 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:46.482795 2444942 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:46.482816 2444942 docker.go:624] Images already preloaded, skipping extraction
	I0110 02:31:46.482894 2444942 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:46.503709 2444942 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:46.503732 2444942 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:31:46.503741 2444942 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0110 02:31:46.503828 2444942 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-389625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:31:46.503890 2444942 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 02:31:46.568277 2444942 cni.go:84] Creating CNI manager for ""
	I0110 02:31:46.568357 2444942 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:46.568393 2444942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:31:46.568445 2444942 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-389625 NodeName:force-systemd-flag-389625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:31:46.568620 2444942 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-389625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
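	
	Annotation: the kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml (see the scp a few lines below). A sanity check one could run against it before init (hypothetical step, not performed by the test):
	
		sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml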
	
	I0110 02:31:46.568728 2444942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:31:46.576738 2444942 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:31:46.576804 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:31:46.584333 2444942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0110 02:31:46.597086 2444942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:31:46.609903 2444942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0110 02:31:46.623198 2444942 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:31:46.627340 2444942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:46.637410 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.813351 2444942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:31:46.853529 2444942 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625 for IP: 192.168.76.2
	I0110 02:31:46.853605 2444942 certs.go:195] generating shared ca certs ...
	I0110 02:31:46.853636 2444942 certs.go:227] acquiring lock for ca certs: {Name:mk3365aee58ca444945faa08aa6e1c1a1b730f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.853847 2444942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key
	I0110 02:31:46.853930 2444942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key
	I0110 02:31:46.853957 2444942 certs.go:257] generating profile certs ...
	I0110 02:31:46.854046 2444942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key
	I0110 02:31:46.854089 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt with IP's: []
	I0110 02:31:46.947349 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt ...
	I0110 02:31:46.947424 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt: {Name:mkc2a0e18aeb9bc161a2b7bdc69edce7c225059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.947656 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key ...
	I0110 02:31:46.947692 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key: {Name:mkbec37be7fe98f01eeac1efcff3341ee3c0872e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.947838 2444942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11
	I0110 02:31:46.947881 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:31:47.211172 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 ...
	I0110 02:31:47.211243 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11: {Name:mkb26b4fa8a855d6ab75cf6ae5986179421e433d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.211463 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11 ...
	I0110 02:31:47.211500 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11: {Name:mkaede7629652a36b550448eb511dc667db770a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.211648 2444942 certs.go:382] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt
	I0110 02:31:47.211795 2444942 certs.go:386] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key
	I0110 02:31:47.211904 2444942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key
	I0110 02:31:47.211947 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt with IP's: []
	I0110 02:31:47.431675 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt ...
	I0110 02:31:47.431751 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt: {Name:mkf0c56bc6a962d35ef411e8b1db0da0dee06e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.431961 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key ...
	I0110 02:31:47.431997 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key: {Name:mk1b1a2249d88d087b490ca8bc1af9bab6c5cd65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.432136 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:31:47.432180 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:31:47.432212 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:31:47.432258 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:31:47.432293 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:31:47.432322 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:31:47.432364 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:31:47.432398 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:31:47.432482 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem (1338 bytes)
	W0110 02:31:47.432539 2444942 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877_empty.pem, impossibly tiny 0 bytes
	I0110 02:31:47.432564 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 02:31:47.432623 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:31:47.432673 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:31:47.432730 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem (1679 bytes)
	I0110 02:31:47.432801 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:47.432861 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.432896 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem -> /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.432926 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.433610 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:31:47.453555 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:31:47.472772 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:31:47.493487 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:31:47.513383 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:31:47.534626 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:31:47.554446 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:31:47.574178 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:31:47.594420 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:31:47.614798 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem --> /usr/share/ca-certificates/2222877.pem (1338 bytes)
	I0110 02:31:47.635266 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /usr/share/ca-certificates/22228772.pem (1708 bytes)
	I0110 02:31:47.655406 2444942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:31:47.670021 2444942 ssh_runner.go:195] Run: openssl version
	I0110 02:31:47.676614 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.684815 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:31:47.693216 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.697583 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.697646 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.771210 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:31:47.792458 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
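Annotation: the b5213941.0 link name is not arbitrary; it is the OpenSSL subject hash of the CA (computed by the `openssl x509 -hash` run just above) plus a ".0" suffix, which is how OpenSSL looks up trusted certs in /etc/ssl/certs:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941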
	I0110 02:31:47.806445 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.828400 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2222877.pem /etc/ssl/certs/2222877.pem
	I0110 02:31:47.841461 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.847202 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 02:00 /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.847317 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.889947 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:31:47.898442 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2222877.pem /etc/ssl/certs/51391683.0
	I0110 02:31:47.910391 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.918871 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/22228772.pem /etc/ssl/certs/22228772.pem
	I0110 02:31:47.928363 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.932866 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 02:00 /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.932981 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.975611 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:47.984122 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/22228772.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:47.992727 2444942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:31:47.997508 2444942 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:31:47.997608 2444942 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:47.997780 2444942 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 02:31:48.015607 2444942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:31:48.027609 2444942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:31:48.037195 2444942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:31:48.037364 2444942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:31:48.049830 2444942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:31:48.049901 2444942 kubeadm.go:158] found existing configuration files:
	
	I0110 02:31:48.049986 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:31:48.059872 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:31:48.059993 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:31:48.068889 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:31:48.079048 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:31:48.079166 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:31:48.088092 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:31:48.098007 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:31:48.098121 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:31:48.107267 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:31:48.117920 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:31:48.118032 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:31:48.127917 2444942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:31:48.180767 2444942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:31:48.180909 2444942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:31:48.290339 2444942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:31:48.290624 2444942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:31:48.290676 2444942 kubeadm.go:319] OS: Linux
	I0110 02:31:48.290728 2444942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:31:48.290780 2444942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:31:48.290831 2444942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:31:48.290894 2444942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:31:48.290946 2444942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:31:48.291013 2444942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:31:48.291064 2444942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:31:48.291119 2444942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:31:48.291170 2444942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:31:48.376921 2444942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:31:48.377171 2444942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:31:48.377352 2444942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:31:48.409493 2444942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:31:48.416465 2444942 out.go:252]   - Generating certificates and keys ...
	I0110 02:31:48.416688 2444942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:31:48.416848 2444942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:31:48.613948 2444942 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:31:49.073506 2444942 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:31:49.428686 2444942 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:31:49.712507 2444942 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:31:49.836655 2444942 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:31:49.837353 2444942 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:31:50.119233 2444942 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:31:50.120016 2444942 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:31:50.479427 2444942 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:31:50.633494 2444942 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:31:50.705818 2444942 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:31:50.706064 2444942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:31:50.768089 2444942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:31:50.918537 2444942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:31:51.105411 2444942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:31:51.794074 2444942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:31:52.020214 2444942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:31:52.020319 2444942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:31:52.025960 2444942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:31:52.029579 2444942 out.go:252]   - Booting up control plane ...
	I0110 02:31:52.029696 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:31:52.030816 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:31:52.032102 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:31:52.049145 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:31:52.049263 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:31:52.057814 2444942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:31:52.058122 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:31:52.058167 2444942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:31:52.196343 2444942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:31:52.196468 2444942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:35:52.196251 2444942 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000438294s
	I0110 02:35:52.196284 2444942 kubeadm.go:319] 
	I0110 02:35:52.196342 2444942 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:35:52.196375 2444942 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:35:52.196480 2444942 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:35:52.196486 2444942 kubeadm.go:319] 
	I0110 02:35:52.196591 2444942 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:35:52.196622 2444942 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:35:52.196653 2444942 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:35:52.196658 2444942 kubeadm.go:319] 
	I0110 02:35:52.202848 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:35:52.203270 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:35:52.203377 2444942 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:35:52.203640 2444942 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 02:35:52.203646 2444942 kubeadm.go:319] 
	I0110 02:35:52.203714 2444942 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
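Annotation: manual triage on the node would start from kubeadm's own hints above, plus cgroup checks relevant to this failure (standard tooling; actually running these here is an assumption, the test did not):

	systemctl status kubelet
	journalctl -u kubelet --no-pager | tail -n 50
	stat -fc %T /sys/fs/cgroup    # tmpfs => cgroup v1, cgroup2fs => cgroup v2
	docker info --format '{{.CgroupDriver}} / cgroup v{{.CgroupVersion}}'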
	W0110 02:35:52.203844 2444942 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000438294s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
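	
	Annotation: the cgroups v1 warning in stderr above is the most plausible root cause: this host runs kernel 5.15 with cgroup v1, and per the warning kubelet v1.35+ will not serve on cgroup v1 unless explicitly opted in, which matches the healthz endpoint never answering. The opt-in the warning describes is one extra field in the KubeletConfiguration document of kubeadm.yaml (sketch; whether minikube should set it is a separate fix):
	
		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		failCgroupV1: false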
	
	I0110 02:35:52.203917 2444942 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 02:35:52.668064 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:35:52.684406 2444942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:35:52.684471 2444942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:35:52.694960 2444942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:35:52.695030 2444942 kubeadm.go:158] found existing configuration files:
	
	I0110 02:35:52.695114 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:35:52.703880 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:35:52.703940 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:35:52.712165 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:35:52.721863 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:35:52.721985 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:35:52.731171 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:35:52.740287 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:35:52.740404 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:35:52.748618 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:35:52.757969 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:35:52.758029 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:35:52.766204 2444942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:35:52.819064 2444942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:35:52.819481 2444942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:35:52.927559 2444942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:35:52.927642 2444942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:35:52.927679 2444942 kubeadm.go:319] OS: Linux
	I0110 02:35:52.927725 2444942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:35:52.927773 2444942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:35:52.927829 2444942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:35:52.927879 2444942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:35:52.927933 2444942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:35:52.927982 2444942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:35:52.928027 2444942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:35:52.928076 2444942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:35:52.928122 2444942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:35:53.012278 2444942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:35:53.012391 2444942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:35:53.012483 2444942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:35:53.037432 2444942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:35:53.040921 2444942 out.go:252]   - Generating certificates and keys ...
	I0110 02:35:53.041059 2444942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:35:53.041136 2444942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:35:53.041218 2444942 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:35:53.041284 2444942 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:35:53.041359 2444942 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:35:53.041417 2444942 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:35:53.041484 2444942 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:35:53.041550 2444942 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:35:53.041630 2444942 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:35:53.041707 2444942 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:35:53.041749 2444942 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:35:53.041814 2444942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:35:53.331718 2444942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:35:53.451638 2444942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:35:53.804134 2444942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:35:54.036793 2444942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:35:54.605846 2444942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:35:54.606454 2444942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:35:54.608995 2444942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:35:54.612162 2444942 out.go:252]   - Booting up control plane ...
	I0110 02:35:54.612265 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:35:54.612343 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:35:54.612409 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:35:54.632870 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:35:54.633407 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:35:54.640913 2444942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:35:54.641255 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:35:54.641302 2444942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:35:54.777508 2444942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:35:54.777628 2444942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:39:54.778464 2444942 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001115013s
	I0110 02:39:54.778491 2444942 kubeadm.go:319] 
	I0110 02:39:54.778555 2444942 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:39:54.778601 2444942 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:39:54.778725 2444942 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:39:54.778735 2444942 kubeadm.go:319] 
	I0110 02:39:54.778847 2444942 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:39:54.778883 2444942 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:39:54.778919 2444942 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:39:54.778927 2444942 kubeadm.go:319] 
	I0110 02:39:54.783246 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:39:54.783712 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:39:54.783842 2444942 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:39:54.784133 2444942 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 02:39:54.784143 2444942 kubeadm.go:319] 
	I0110 02:39:54.784229 2444942 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:39:54.784293 2444942 kubeadm.go:403] duration metric: took 8m6.786690861s to StartCluster
	I0110 02:39:54.784334 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:39:54.784409 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:39:54.833811 2444942 cri.go:96] found id: ""
	I0110 02:39:54.833848 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.833857 2444942 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:39:54.833864 2444942 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:39:54.833927 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:39:54.874597 2444942 cri.go:96] found id: ""
	I0110 02:39:54.874676 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.874698 2444942 logs.go:284] No container was found matching "etcd"
	I0110 02:39:54.874717 2444942 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:39:54.874799 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:39:54.907340 2444942 cri.go:96] found id: ""
	I0110 02:39:54.907364 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.907372 2444942 logs.go:284] No container was found matching "coredns"
	I0110 02:39:54.907379 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:39:54.907439 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:39:54.942974 2444942 cri.go:96] found id: ""
	I0110 02:39:54.943001 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.943010 2444942 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:39:54.943018 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:39:54.943077 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:39:54.981427 2444942 cri.go:96] found id: ""
	I0110 02:39:54.981449 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.981458 2444942 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:39:54.981465 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:39:54.981531 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:39:55.041924 2444942 cri.go:96] found id: ""
	I0110 02:39:55.041946 2444942 logs.go:282] 0 containers: []
	W0110 02:39:55.041994 2444942 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:39:55.042004 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:39:55.042072 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:39:55.114566 2444942 cri.go:96] found id: ""
	I0110 02:39:55.114587 2444942 logs.go:282] 0 containers: []
	W0110 02:39:55.114596 2444942 logs.go:284] No container was found matching "kindnet"
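	Every probe above returned an empty id list: no control-plane container was ever created, consistent with the kubelet never coming up. A minimal sketch of the same check run by hand inside the node (command copied from the log):
	
	  sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver   # empty output = no container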
	I0110 02:39:55.114606 2444942 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:39:55.114634 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:39:55.229791 2444942 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:55.208165    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.208559    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.217197    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.218001    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.222039    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:39:55.208165    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.208559    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.217197    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.218001    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.222039    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:39:55.229812 2444942 logs.go:123] Gathering logs for Docker ...
	I0110 02:39:55.229837 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 02:39:55.267290 2444942 logs.go:123] Gathering logs for container status ...
	I0110 02:39:55.267338 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:39:55.359988 2444942 logs.go:123] Gathering logs for kubelet ...
	I0110 02:39:55.360018 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:39:55.456371 2444942 logs.go:123] Gathering logs for dmesg ...
	I0110 02:39:55.456405 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0110 02:39:55.476932 2444942 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:39:55.476973 2444942 out.go:285] * 
	* 
	W0110 02:39:55.477022 2444942 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:55.477184 2444942 out.go:285] * 
	* 
	W0110 02:39:55.477459 2444942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:39:55.484573 2444942 out.go:203] 
	W0110 02:39:55.488432 2444942 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:55.488495 2444942 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:39:55.488519 2444942 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:39:55.491695 2444942 out.go:203] 

                                                
                                                
** /stderr **
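The failed start above ends with minikube's own "Suggestion:" line; a hedged sketch of that retry, reusing the profile, memory, driver, and runtime flags from this run:

  out/minikube-linux-arm64 start -p force-systemd-flag-389625 \
    --memory=3072 --force-systemd --driver=docker --container-runtime=docker \
    --extra-config=kubelet.cgroup-driver=systemd   # per the Suggestion in the log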
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-389625 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-389625 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 02:39:56.082127666 +0000 UTC m=+2780.290215509
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-389625
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-389625:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07",
	        "Created": "2026-01-10T02:31:36.748439063Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2446115,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:31:36.807266698Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07/hostname",
	        "HostsPath": "/var/lib/docker/containers/c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07/hosts",
	        "LogPath": "/var/lib/docker/containers/c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07/c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07-json.log",
	        "Name": "/force-systemd-flag-389625",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-389625:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-389625",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c756577a6f6a66791f7adeb9d3115dfe4eeccdd8300730bb86214a9483838d07",
	                "LowerDir": "/var/lib/docker/overlay2/c8ed87a7a55b3a8d25106eba6a27b7e72ac056695741710f830c1cf815bcbb12-init/diff:/var/lib/docker/overlay2/3279adf6388395c7fd34e962c09da15366b225a7b796d4f2275704eeca225de8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8ed87a7a55b3a8d25106eba6a27b7e72ac056695741710f830c1cf815bcbb12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8ed87a7a55b3a8d25106eba6a27b7e72ac056695741710f830c1cf815bcbb12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8ed87a7a55b3a8d25106eba6a27b7e72ac056695741710f830c1cf815bcbb12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-389625",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-389625/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-389625",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-389625",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-389625",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b17fe84301aa8a8771e83ad3aba6d5e0aa042020435b7617086a541938e24c45",
	            "SandboxKey": "/var/run/docker/netns/b17fe84301aa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34986"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34987"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34990"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34988"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34989"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-389625": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:4e:b7:f0:e8:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "05b624cbaecbfb4107db7d6109f5dec1fb867b1cd7ee4eb16a9bf04ca55ff1d2",
	                    "EndpointID": "3e47fb6f72bf1949091c2d646103f916b9d921452b0a2a0af24d1f8b7f86d253",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-389625",
	                        "c756577a6f6a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
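Most of the inspect JSON above is standard kicbase container setup; a minimal sketch, assuming the container name from this run, of pulling just the fields relevant to this failure via docker inspect's Go-template formatter:

  docker inspect -f '{{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}}' force-systemd-flag-389625
  # host port mapped to the node's SSH port (22/tcp), from NetworkSettings.Ports:
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-389625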
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-389625 -n force-systemd-flag-389625
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-389625 -n force-systemd-flag-389625: exit status 6 (477.481468ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0110 02:39:56.571329 2457811 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-389625" does not appear in /home/jenkins/minikube-integration/22414-2221005/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
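
Exit status 6 means the host itself is running but the kubeconfig check failed: as the stderr above shows, the profile "force-systemd-flag-389625" was never written into /home/jenkins/minikube-integration/22414-2221005/kubeconfig. A minimal sketch of confirming and repairing that by hand, assuming the same binary and profile as this run:

    # the profile should appear here once the cluster has been provisioned
    kubectl config get-contexts
    # rewrite the context to point at the running cluster, as the warning above suggests
    out/minikube-linux-arm64 update-context -p force-systemd-flag-389625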
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-389625 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-818554 sudo systemctl status docker --all --full --no-pager                                                         │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat docker --no-pager                                                                         │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /etc/docker/daemon.json                                                                             │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo docker system info                                                                                      │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cri-dockerd --version                                                                                   │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat containerd --no-pager                                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /etc/containerd/config.toml                                                                         │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo containerd config dump                                                                                  │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat crio --no-pager                                                                           │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ delete  │ -p offline-docker-420658                                                                                                      │ offline-docker-420658     │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │ 10 Jan 26 02:31 UTC │
	│ ssh     │ -p cilium-818554 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo crio config                                                                                             │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ delete  │ -p cilium-818554                                                                                                              │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │ 10 Jan 26 02:31 UTC │
	│ start   │ -p force-systemd-env-405089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-405089  │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ start   │ -p force-systemd-flag-389625 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-389625 │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ force-systemd-env-405089 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-405089  │ jenkins │ v1.37.0 │ 10 Jan 26 02:39 UTC │ 10 Jan 26 02:39 UTC │
	│ ssh     │ force-systemd-flag-389625 ssh docker info --format {{.CgroupDriver}}                                                          │ force-systemd-flag-389625 │ jenkins │ v1.37.0 │ 10 Jan 26 02:39 UTC │ 10 Jan 26 02:39 UTC │
	│ delete  │ -p force-systemd-env-405089                                                                                                   │ force-systemd-env-405089  │ jenkins │ v1.37.0 │ 10 Jan 26 02:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
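
The two `ssh ... docker info --format {{.CgroupDriver}}` entries near the end of the audit table are the core assertion of both force-systemd tests: after start-up they ask the Docker daemon inside the node which cgroup driver it runs. A minimal sketch of the same probe issued by hand, mirroring the audit entry (binary path and profile name as used throughout this log):

    # with --force-systemd the expected answer is "systemd", not "cgroupfs"
    out/minikube-linux-arm64 -p force-systemd-flag-389625 ssh docker info --format {{.CgroupDriver}}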
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:31:31
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
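
That header format is the standard glog/klog prefix, with the first letter giving severity (Info, Warning, Error, Fatal). A small sketch that uses it to pull only warnings and errors out of a captured log (the file name minikube.log is hypothetical):

    # severity is the first non-blank byte of each prefixed line
    grep -E '^[[:space:]]*[WE][0-9]{4}' minikube.log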
	I0110 02:31:31.403273 2444942 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:31:31.403569 2444942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:31.403599 2444942 out.go:374] Setting ErrFile to fd 2...
	I0110 02:31:31.403618 2444942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:31.403919 2444942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:31:31.404424 2444942 out.go:368] Setting JSON to false
	I0110 02:31:31.405395 2444942 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36841,"bootTime":1767975451,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 02:31:31.405497 2444942 start.go:143] virtualization:  
	I0110 02:31:31.408819 2444942 out.go:179] * [force-systemd-flag-389625] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:31:31.412885 2444942 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:31:31.412964 2444942 notify.go:221] Checking for updates...
	I0110 02:31:31.425190 2444942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:31:31.428163 2444942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 02:31:31.431030 2444942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 02:31:31.433941 2444942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:31:31.436853 2444942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:31:31.440329 2444942 config.go:182] Loaded profile config "force-systemd-env-405089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:31.440445 2444942 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:31:31.473277 2444942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:31:31.473389 2444942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:31.569510 2444942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2026-01-10 02:31:31.559356986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:31.569623 2444942 docker.go:319] overlay module found
	I0110 02:31:31.577195 2444942 out.go:179] * Using the docker driver based on user configuration
	I0110 02:31:31.580216 2444942 start.go:309] selected driver: docker
	I0110 02:31:31.580239 2444942 start.go:928] validating driver "docker" against <nil>
	I0110 02:31:31.580254 2444942 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:31:31.580972 2444942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:31.685470 2444942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2026-01-10 02:31:31.673022095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:31.685622 2444942 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:31:31.685842 2444942 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:31:31.695072 2444942 out.go:179] * Using Docker driver with root privileges
	I0110 02:31:31.704472 2444942 cni.go:84] Creating CNI manager for ""
	I0110 02:31:31.704566 2444942 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:31.704582 2444942 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 02:31:31.704671 2444942 start.go:353] cluster config:
	{Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:31.716615 2444942 out.go:179] * Starting "force-systemd-flag-389625" primary control-plane node in "force-systemd-flag-389625" cluster
	I0110 02:31:31.725232 2444942 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 02:31:31.731542 2444942 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:31:31.734740 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:31.734792 2444942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 02:31:31.734803 2444942 cache.go:65] Caching tarball of preloaded images
	I0110 02:31:31.734922 2444942 preload.go:251] Found /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 02:31:31.734933 2444942 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 02:31:31.735052 2444942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json ...
	I0110 02:31:31.735070 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json: {Name:mkf231dfddb62b8df14c42136e70d1c72c396e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
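
The cluster config dumped a few lines above is persisted as JSON at the profile path shown here. A minimal sketch of inspecting it after the run (path copied from this log; jq on the host is an assumption):

    # pretty-print the saved profile; fields mirror the cluster config dump above
    jq . /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json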
	I0110 02:31:31.735223 2444942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:31:31.768290 2444942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:31:31.768314 2444942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:31:31.768329 2444942 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:31:31.768360 2444942 start.go:360] acquireMachinesLock for force-systemd-flag-389625: {Name:mkda4641748142b11aadec6867161d872c9610a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:31:31.768468 2444942 start.go:364] duration metric: took 88.236µs to acquireMachinesLock for "force-systemd-flag-389625"
	I0110 02:31:31.768503 2444942 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 02:31:31.768575 2444942 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:31:29.585409 2444124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:31:29.585634 2444124 start.go:159] libmachine.API.Create for "force-systemd-env-405089" (driver="docker")
	I0110 02:31:29.585669 2444124 client.go:173] LocalClient.Create starting
	I0110 02:31:29.585728 2444124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem
	I0110 02:31:29.585764 2444124 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:29.585784 2444124 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:29.585842 2444124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem
	I0110 02:31:29.585863 2444124 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:29.585883 2444124 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:29.586231 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:31:29.610121 2444124 cli_runner.go:211] docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:31:29.610191 2444124 network_create.go:284] running [docker network inspect force-systemd-env-405089] to gather additional debugging logs...
	I0110 02:31:29.610221 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089
	W0110 02:31:29.644159 2444124 cli_runner.go:211] docker network inspect force-systemd-env-405089 returned with exit code 1
	I0110 02:31:29.644186 2444124 network_create.go:287] error running [docker network inspect force-systemd-env-405089]: docker network inspect force-systemd-env-405089: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-405089 not found
	I0110 02:31:29.644198 2444124 network_create.go:289] output of [docker network inspect force-systemd-env-405089]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-405089 not found
	
	** /stderr **
	I0110 02:31:29.644302 2444124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:29.676112 2444124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
	I0110 02:31:29.676635 2444124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0306382db894 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:0a:12:a6:69:af} reservation:<nil>}
	I0110 02:31:29.676947 2444124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42f1ed7cacde IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:5d:25:88:ef:ef} reservation:<nil>}
	I0110 02:31:29.677429 2444124 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6c9be719dc1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:8d:64:6b:58:be} reservation:<nil>}
	I0110 02:31:29.678964 2444124 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5e090}
	I0110 02:31:29.679020 2444124 network_create.go:124] attempt to create docker network force-systemd-env-405089 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:31:29.679130 2444124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-405089 force-systemd-env-405089
	I0110 02:31:29.775823 2444124 network_create.go:108] docker network force-systemd-env-405089 192.168.85.0/24 created
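
The subnet scan above walks candidate /24 networks (192.168.49.0, .58.0, .67.0, .76.0) until it finds one with no existing bridge interface, then creates the Docker network with a fixed gateway and MTU. A minimal sketch of verifying the result with the same IPAM fields the scanner reads (network name from this run):

    # confirm the subnet and gateway chosen for the new network
    docker network inspect force-systemd-env-405089 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # for this run: 192.168.85.0/24 192.168.85.1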
	I0110 02:31:29.775860 2444124 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-405089" container
	I0110 02:31:29.775934 2444124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:31:29.794158 2444124 cli_runner.go:164] Run: docker volume create force-systemd-env-405089 --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:31:29.812548 2444124 oci.go:103] Successfully created a docker volume force-systemd-env-405089
	I0110 02:31:29.812646 2444124 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-405089-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --entrypoint /usr/bin/test -v force-systemd-env-405089:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:31:30.375187 2444124 oci.go:107] Successfully prepared a docker volume force-systemd-env-405089
	I0110 02:31:30.375254 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:30.375264 2444124 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:31:30.375340 2444124 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-405089:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:31:33.218633 2444124 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-405089:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (2.843257443s)
	I0110 02:31:33.218668 2444124 kic.go:203] duration metric: took 2.843399774s to extract preloaded images to volume ...
	W0110 02:31:33.218794 2444124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:31:33.218913 2444124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:31:33.308593 2444124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-405089 --name force-systemd-env-405089 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-405089 --network force-systemd-env-405089 --ip 192.168.85.2 --volume force-systemd-env-405089:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:31:33.809884 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Running}}
	I0110 02:31:33.863227 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:33.899538 2444124 cli_runner.go:164] Run: docker exec force-systemd-env-405089 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:31:33.973139 2444124 oci.go:144] the created container "force-systemd-env-405089" has a running status.
	I0110 02:31:33.973175 2444124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa...
	I0110 02:31:34.190131 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:31:34.190189 2444124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
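
The two lines above install the freshly generated public key as /home/docker/.ssh/authorized_keys inside the node container; all later provisioning traffic authenticates with it over the mapped SSH port. A minimal sketch of the same login done by hand (key path and the 22/tcp host port 34981 both appear in this log):

    ssh -i /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa \
        -p 34981 docker@127.0.0.1 hostname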
	I0110 02:31:31.770687 2444942 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:31:31.770958 2444942 start.go:159] libmachine.API.Create for "force-systemd-flag-389625" (driver="docker")
	I0110 02:31:31.770996 2444942 client.go:173] LocalClient.Create starting
	I0110 02:31:31.771061 2444942 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem
	I0110 02:31:31.771107 2444942 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:31.771131 2444942 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:31.771194 2444942 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem
	I0110 02:31:31.771216 2444942 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:31.771231 2444942 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:31.771599 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:31:31.789231 2444942 cli_runner.go:211] docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:31:31.789311 2444942 network_create.go:284] running [docker network inspect force-systemd-flag-389625] to gather additional debugging logs...
	I0110 02:31:31.789330 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625
	W0110 02:31:31.804491 2444942 cli_runner.go:211] docker network inspect force-systemd-flag-389625 returned with exit code 1
	I0110 02:31:31.804519 2444942 network_create.go:287] error running [docker network inspect force-systemd-flag-389625]: docker network inspect force-systemd-flag-389625: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-389625 not found
	I0110 02:31:31.804531 2444942 network_create.go:289] output of [docker network inspect force-systemd-flag-389625]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-389625 not found
	
	** /stderr **
	I0110 02:31:31.804633 2444942 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:31.821447 2444942 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
	I0110 02:31:31.821788 2444942 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0306382db894 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:0a:12:a6:69:af} reservation:<nil>}
	I0110 02:31:31.822120 2444942 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42f1ed7cacde IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:5d:25:88:ef:ef} reservation:<nil>}
	I0110 02:31:31.822532 2444942 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001977430}
	I0110 02:31:31.822549 2444942 network_create.go:124] attempt to create docker network force-systemd-flag-389625 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:31:31.822614 2444942 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-389625 force-systemd-flag-389625
	I0110 02:31:31.879729 2444942 network_create.go:108] docker network force-systemd-flag-389625 192.168.76.0/24 created
	I0110 02:31:31.879758 2444942 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-389625" container
	I0110 02:31:31.879830 2444942 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:31:31.907715 2444942 cli_runner.go:164] Run: docker volume create force-systemd-flag-389625 --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:31:31.939677 2444942 oci.go:103] Successfully created a docker volume force-systemd-flag-389625
	I0110 02:31:31.939777 2444942 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-389625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --entrypoint /usr/bin/test -v force-systemd-flag-389625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:31:33.763406 2444942 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-389625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --entrypoint /usr/bin/test -v force-systemd-flag-389625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (1.823586252s)
	I0110 02:31:33.763439 2444942 oci.go:107] Successfully prepared a docker volume force-systemd-flag-389625
	I0110 02:31:33.763488 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:33.763505 2444942 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:31:33.763585 2444942 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-389625:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:31:34.225021 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:34.249136 2444124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:31:34.249157 2444124 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-405089 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:31:34.332008 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:34.365192 2444124 machine.go:94] provisionDockerMachine start ...
	I0110 02:31:34.365297 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:34.393974 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:34.394308 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:34.394318 2444124 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:31:34.394993 2444124 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33838->127.0.0.1:34981: read: connection reset by peer
	I0110 02:31:37.568807 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-405089
	
	I0110 02:31:37.568832 2444124 ubuntu.go:182] provisioning hostname "force-systemd-env-405089"
	I0110 02:31:37.568912 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:37.588249 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.588558 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:37.588583 2444124 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-405089 && echo "force-systemd-env-405089" | sudo tee /etc/hostname
	I0110 02:31:37.746546 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-405089
	
	I0110 02:31:37.746628 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:37.767459 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.767772 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:37.767794 2444124 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-405089' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-405089/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-405089' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:31:37.917803 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:31:37.917839 2444124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2221005/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2221005/.minikube}
	I0110 02:31:37.917867 2444124 ubuntu.go:190] setting up certificates
	I0110 02:31:37.917878 2444124 provision.go:84] configureAuth start
	I0110 02:31:37.917939 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:37.936050 2444124 provision.go:143] copyHostCerts
	I0110 02:31:37.936093 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:37.936126 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem, removing ...
	I0110 02:31:37.936143 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:37.936221 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem (1082 bytes)
	I0110 02:31:37.936318 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:37.936341 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem, removing ...
	I0110 02:31:37.936350 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:37.936386 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem (1123 bytes)
	I0110 02:31:37.936442 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:37.936463 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem, removing ...
	I0110 02:31:37.936471 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:37.936496 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem (1679 bytes)
	I0110 02:31:37.936548 2444124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-405089 san=[127.0.0.1 192.168.85.2 force-systemd-env-405089 localhost minikube]
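
The server certificate generated here embeds the SAN list shown (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube), which is what lets the TLS-guarded Docker endpoint be reached under any of those names. A minimal sketch of checking the SANs on the resulting file (path from this log; openssl on the host is an assumption):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'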
	I0110 02:31:38.258206 2444124 provision.go:177] copyRemoteCerts
	I0110 02:31:38.258288 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:31:38.258339 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.276203 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:38.381656 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:31:38.381728 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:31:38.400027 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:31:38.400088 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0110 02:31:38.417556 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:31:38.417620 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:31:38.438553 2444124 provision.go:87] duration metric: took 520.648879ms to configureAuth
	I0110 02:31:38.438640 2444124 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:31:38.438850 2444124 config.go:182] Loaded profile config "force-systemd-env-405089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:38.438923 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.456723 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.457166 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.457186 2444124 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 02:31:38.623956 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 02:31:38.623983 2444124 ubuntu.go:71] root file system type: overlay
	I0110 02:31:38.624112 2444124 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 02:31:38.624190 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.651894 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.652212 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.652296 2444124 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 02:31:38.832340 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 02:31:38.832516 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.851001 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.851318 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.851335 2444124 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
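
The command above makes the unit update idempotent: diff exits 0 when the new file matches the installed one and nothing happens; only on a difference does the mv / daemon-reload / enable / restart chain run. A minimal sketch of confirming the restart took effect, reusing the check this test later performs over SSH (run inside the node):

    # after the restart the daemon should report the driver the test forces
    docker info --format '{{.CgroupDriver}}'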
	I0110 02:31:36.676943 2444942 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-389625:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (2.913316987s)
	I0110 02:31:36.676976 2444942 kic.go:203] duration metric: took 2.913468033s to extract preloaded images to volume ...
	W0110 02:31:36.677157 2444942 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:31:36.677267 2444942 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:31:36.733133 2444942 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-389625 --name force-systemd-flag-389625 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-389625 --network force-systemd-flag-389625 --ip 192.168.76.2 --volume force-systemd-flag-389625:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:31:37.020083 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Running}}
	I0110 02:31:37.049554 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.073410 2444942 cli_runner.go:164] Run: docker exec force-systemd-flag-389625 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:31:37.123872 2444942 oci.go:144] the created container "force-systemd-flag-389625" has a running status.
	I0110 02:31:37.123914 2444942 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa...
	I0110 02:31:37.219546 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:31:37.219643 2444942 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:31:37.246178 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.265663 2444942 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:31:37.265687 2444942 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-389625 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:31:37.315490 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.344025 2444942 machine.go:94] provisionDockerMachine start ...
	I0110 02:31:37.344113 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:37.365329 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.366213 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:37.366237 2444942 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:31:37.366917 2444942 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:31:40.525424 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-389625
	
	I0110 02:31:40.525452 2444942 ubuntu.go:182] provisioning hostname "force-systemd-flag-389625"
	I0110 02:31:40.525529 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:40.550883 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:40.551514 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:40.551534 2444942 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-389625 && echo "force-systemd-flag-389625" | sudo tee /etc/hostname
	I0110 02:31:40.741599 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-389625
	
	I0110 02:31:40.741787 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:40.769891 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:40.770349 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:40.770376 2444942 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-389625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-389625/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-389625' | sudo tee -a /etc/hosts; 
				fi
			fi
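The snippet above is likewise idempotent: it leaves /etc/hosts alone when some line already maps the hostname, rewrites an existing 127.0.1.1 entry in place, and appends one otherwise. A quick check that the mapping took (standard tools, not run by this test):

    getent hosts force-systemd-flag-389625    # resolves via /etc/hosts
    grep force-systemd-flag-389625 /etc/hosts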
	I0110 02:31:40.933268 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:31:40.933300 2444942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2221005/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2221005/.minikube}
	I0110 02:31:40.933334 2444942 ubuntu.go:190] setting up certificates
	I0110 02:31:40.933344 2444942 provision.go:84] configureAuth start
	I0110 02:31:40.933425 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:40.954041 2444942 provision.go:143] copyHostCerts
	I0110 02:31:40.954074 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:40.954109 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem, removing ...
	I0110 02:31:40.954115 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:40.954187 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem (1082 bytes)
	I0110 02:31:40.954287 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:40.954306 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem, removing ...
	I0110 02:31:40.954311 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:40.954348 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem (1123 bytes)
	I0110 02:31:40.954426 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:40.954443 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem, removing ...
	I0110 02:31:40.954447 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:40.954472 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem (1679 bytes)
	I0110 02:31:40.954527 2444942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-389625 san=[127.0.0.1 192.168.76.2 force-systemd-flag-389625 localhost minikube]
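provision.go signs a server certificate against the minikube CA with the SAN list shown. An equivalent done by hand with openssl, as a sketch only (the RSA key size, 365-day validity, and file names are assumptions; the SANs are copied from the log line above):

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.force-systemd-flag-389625" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:force-systemd-flag-389625,DNS:localhost,DNS:minikube')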
	I0110 02:31:41.170708 2444942 provision.go:177] copyRemoteCerts
	I0110 02:31:41.170784 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:31:41.170832 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.191286 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:41.302379 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:31:41.302491 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 02:31:41.325187 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:31:41.325316 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:31:41.349568 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:31:41.349680 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:31:41.371181 2444942 provision.go:87] duration metric: took 437.80859ms to configureAuth
	I0110 02:31:41.371265 2444942 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:31:41.371507 2444942 config.go:182] Loaded profile config "force-systemd-flag-389625": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:41.371603 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.397226 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.397537 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.397547 2444942 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 02:31:39.848898 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 02:31:38.826649162 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
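With the new unit installed, the effective configuration can be inspected without reading files by hand; both commands below are stock systemd, not something this test runs:

    systemctl cat docker.service               # unit as loaded, drop-ins included
    systemd-delta --type=extended,overridden   # units diverging from vendor defaults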
	I0110 02:31:39.848925 2444124 machine.go:97] duration metric: took 5.483705976s to provisionDockerMachine
	I0110 02:31:39.848938 2444124 client.go:176] duration metric: took 10.263257466s to LocalClient.Create
	I0110 02:31:39.848983 2444124 start.go:167] duration metric: took 10.263350347s to libmachine.API.Create "force-systemd-env-405089"
	I0110 02:31:39.848999 2444124 start.go:293] postStartSetup for "force-systemd-env-405089" (driver="docker")
	I0110 02:31:39.849010 2444124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:31:39.849143 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:31:39.849190 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:39.867772 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:39.969324 2444124 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:31:39.972690 2444124 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:31:39.972719 2444124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:31:39.972731 2444124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/addons for local assets ...
	I0110 02:31:39.972810 2444124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/files for local assets ...
	I0110 02:31:39.972927 2444124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> 22228772.pem in /etc/ssl/certs
	I0110 02:31:39.972937 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /etc/ssl/certs/22228772.pem
	I0110 02:31:39.973066 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:31:39.981882 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:40.000017 2444124 start.go:296] duration metric: took 151.001946ms for postStartSetup
	I0110 02:31:40.000404 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:40.038533 2444124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/config.json ...
	I0110 02:31:40.038894 2444124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:31:40.038954 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.057310 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.158291 2444124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:31:40.163210 2444124 start.go:128] duration metric: took 10.581191023s to createHost
	I0110 02:31:40.163237 2444124 start.go:83] releasing machines lock for "force-systemd-env-405089", held for 10.581321237s
	I0110 02:31:40.163309 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:40.180948 2444124 ssh_runner.go:195] Run: cat /version.json
	I0110 02:31:40.181013 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.181219 2444124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:31:40.181281 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.201769 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.209162 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.305222 2444124 ssh_runner.go:195] Run: systemctl --version
	I0110 02:31:40.413487 2444124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:31:40.417954 2444124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:31:40.418043 2444124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:31:40.452568 2444124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:31:40.452596 2444124 start.go:496] detecting cgroup driver to use...
	I0110 02:31:40.452613 2444124 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:40.452712 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:40.470526 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 02:31:40.482803 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 02:31:40.492346 2444124 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 02:31:40.492457 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 02:31:40.502450 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:40.511527 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 02:31:40.520445 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:40.540761 2444124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:31:40.551850 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 02:31:40.563654 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 02:31:40.574796 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 02:31:40.585488 2444124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:31:40.595355 2444124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:31:40.609803 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:40.744003 2444124 ssh_runner.go:195] Run: sudo systemctl restart containerd
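The sed pipeline above (SystemdCgroup flipped to true, runtime.v1.linux and runc.v1 rewritten to runc.v2, conf_dir pinned, enable_unprivileged_ports injected) converges containerd's CRI plugin on the systemd cgroup driver. A check of the result, with the expected stanza sketched in comments (assuming the io.containerd.grpc.v1.cri config layout that the sed expressions target):

    # Expected after the edits:
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    #     runtime_type = "io.containerd.runc.v2"
    #     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #       SystemdCgroup = true
    sudo grep -n -e 'runc.v2' -e 'SystemdCgroup' /etc/containerd/config.toml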
	I0110 02:31:40.876690 2444124 start.go:496] detecting cgroup driver to use...
	I0110 02:31:40.876724 2444124 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:40.876779 2444124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 02:31:40.904144 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:40.918264 2444124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:31:40.953661 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:40.974405 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 02:31:40.989753 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:41.013689 2444124 ssh_runner.go:195] Run: which cri-dockerd
	I0110 02:31:41.018251 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 02:31:41.027476 2444124 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 02:31:41.042305 2444124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 02:31:41.204191 2444124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 02:31:41.332172 2444124 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 02:31:41.332275 2444124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 02:31:41.346373 2444124 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 02:31:41.360708 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:41.514222 2444124 ssh_runner.go:195] Run: sudo systemctl restart docker
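The 129-byte daemon.json is written from memory, so its content never appears in the log. A plausible sketch of a systemd-cgroup-driver daemon.json (the exact keys are an assumption here, not the verbatim payload minikube sent):

    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    docker info --format '{{.CgroupDriver}}'   # should now print: systemd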
	I0110 02:31:42.063171 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:31:42.079374 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 02:31:42.101258 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:42.121381 2444124 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 02:31:42.317076 2444124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 02:31:42.488596 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:42.651589 2444124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 02:31:42.669531 2444124 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 02:31:42.687478 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:42.822336 2444124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 02:31:42.917629 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:42.937980 2444124 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 02:31:42.938103 2444124 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 02:31:42.943727 2444124 start.go:574] Will wait 60s for crictl version
	I0110 02:31:42.943794 2444124 ssh_runner.go:195] Run: which crictl
	I0110 02:31:42.948403 2444124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:31:42.980867 2444124 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 02:31:42.980939 2444124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:43.004114 2444124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:43.042145 2444124 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 02:31:43.042280 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:43.065360 2444124 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:31:43.069214 2444124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
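That one-liner rewrites /etc/hosts by filtering out any stale host.minikube.internal line, appending the current mapping, and staging the result under /tmp before a single sudo cp, so readers never see a half-written file. The same idiom as a reusable function (the name pin_host is hypothetical, not from minikube):

    # pin_host NAME IP: keep exactly one "IP<tab>NAME" line in /etc/hosts.
    pin_host() {
      local name=$1 ip=$2 tmp
      tmp=$(mktemp)
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
      sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
    }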
	I0110 02:31:43.081864 2444124 kubeadm.go:884] updating cluster {Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:31:43.081980 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:43.082036 2444124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:43.100460 2444124 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:43.100486 2444124 docker.go:624] Images already preloaded, skipping extraction
	I0110 02:31:43.100552 2444124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:43.121230 2444124 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:43.121256 2444124 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:31:43.121266 2444124 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0110 02:31:43.121361 2444124 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-405089 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:31:43.121432 2444124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 02:31:43.178538 2444124 cni.go:84] Creating CNI manager for ""
	I0110 02:31:43.178570 2444124 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:43.178597 2444124 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:31:43.178618 2444124 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-405089 NodeName:force-systemd-env-405089 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:31:43.178739 2444124 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-405089"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
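Before kubeadm consumes it, a config like the three-document YAML above can be exercised without touching the node; both subcommands are standard kubeadm, and neither is run by this test:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run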
	I0110 02:31:43.178809 2444124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:31:43.186967 2444124 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:31:43.187037 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:31:43.196792 2444124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0110 02:31:43.210260 2444124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:31:43.225215 2444124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 02:31:43.239490 2444124 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:31:43.243821 2444124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:43.256336 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:43.411763 2444124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:31:43.449071 2444124 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089 for IP: 192.168.85.2
	I0110 02:31:43.449090 2444124 certs.go:195] generating shared ca certs ...
	I0110 02:31:43.449107 2444124 certs.go:227] acquiring lock for ca certs: {Name:mk3365aee58ca444945faa08aa6e1c1a1b730f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:43.449242 2444124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key
	I0110 02:31:43.449285 2444124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key
	I0110 02:31:43.449293 2444124 certs.go:257] generating profile certs ...
	I0110 02:31:43.449348 2444124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key
	I0110 02:31:43.449359 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt with IP's: []
	I0110 02:31:44.085771 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt ...
	I0110 02:31:44.085806 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt: {Name:mkef9124ceed79304369528c5a27c7648b78a9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.086085 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key ...
	I0110 02:31:44.086119 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key: {Name:mk76e85724a13af463ddacfcf286ac686d149ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.086302 2444124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b
	I0110 02:31:44.086324 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:31:44.498570 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b ...
	I0110 02:31:44.498600 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b: {Name:mk4617469fd5fea335a0e87bd3a6539b7da9cd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.498789 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b ...
	I0110 02:31:44.498804 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b: {Name:mkb480b1d60b5ebb03b826d7d02dfd7e44510312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.498902 2444124 certs.go:382] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt
	I0110 02:31:44.498990 2444124 certs.go:386] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key
	I0110 02:31:44.499054 2444124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key
	I0110 02:31:44.499073 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt with IP's: []
	I0110 02:31:44.994504 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt ...
	I0110 02:31:44.994541 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt: {Name:mkcf4f6fccba9f412afa8632ad4d0d2e51e05241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.995667 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key ...
	I0110 02:31:44.995695 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key: {Name:mka4f25cc07eefbd88194e70f96e9c6a66c304c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.995867 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:31:44.995917 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:31:44.995937 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:31:44.995956 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:31:44.995969 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:31:44.996009 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:31:44.996029 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:31:44.996041 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:31:44.996117 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem (1338 bytes)
	W0110 02:31:44.996176 2444124 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877_empty.pem, impossibly tiny 0 bytes
	I0110 02:31:44.996191 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 02:31:44.996234 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:31:44.996282 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:31:44.996317 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem (1679 bytes)
	I0110 02:31:44.996397 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:44.996456 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /usr/share/ca-certificates/22228772.pem
	I0110 02:31:44.996487 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:44.996506 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem -> /usr/share/ca-certificates/2222877.pem
	I0110 02:31:44.997128 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:31:45.025285 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:31:45.117344 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:31:45.154410 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:31:45.182384 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:31:45.209059 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:31:45.237573 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:31:45.267287 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:31:45.300586 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /usr/share/ca-certificates/22228772.pem (1708 bytes)
	I0110 02:31:45.325792 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:31:45.348897 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem --> /usr/share/ca-certificates/2222877.pem (1338 bytes)
	I0110 02:31:45.369906 2444124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:31:45.388123 2444124 ssh_runner.go:195] Run: openssl version
	I0110 02:31:45.396122 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.406059 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/22228772.pem /etc/ssl/certs/22228772.pem
	I0110 02:31:45.416032 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.424294 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 02:00 /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.424422 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.470580 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:45.479156 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/22228772.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:45.487771 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.496463 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:31:45.504778 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.510041 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.510161 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.562186 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:31:45.570693 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:31:45.585612 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.595614 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2222877.pem /etc/ssl/certs/2222877.pem
	I0110 02:31:45.604471 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.609923 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 02:00 /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.609994 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.652960 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:31:45.665449 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2222877.pem /etc/ssl/certs/51391683.0
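The <hash>.0 links above implement OpenSSL's hashed CA directory lookup: openssl x509 -hash prints the subject-name hash that becomes the link name (b5213941 is what the minikubeCA subject hashes to, matching the link created earlier). One entry done by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem  # expect: OK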
	I0110 02:31:45.674251 2444124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:31:45.678274 2444124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:31:45.678327 2444124 kubeadm.go:401] StartCluster: {Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:45.678449 2444124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 02:31:45.696897 2444124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:31:45.715111 2444124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:31:45.737562 2444124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:31:45.737624 2444124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:31:45.753944 2444124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:31:45.754029 2444124 kubeadm.go:158] found existing configuration files:
	
	I0110 02:31:45.754124 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:31:45.767074 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:31:45.767192 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:31:45.775076 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:31:45.783839 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:31:45.783955 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:31:45.791557 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:31:45.800017 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:31:45.800155 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:31:45.807549 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:31:45.815839 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:31:45.815967 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:31:45.823557 2444124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:31:45.879151 2444124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:31:45.879584 2444124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:31:45.979502 2444124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:31:45.979676 2444124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:31:45.979749 2444124 kubeadm.go:319] OS: Linux
	I0110 02:31:45.979833 2444124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:31:45.979918 2444124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:31:45.979997 2444124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:31:45.980082 2444124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:31:45.980163 2444124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:31:45.980247 2444124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:31:45.980325 2444124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:31:45.980413 2444124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:31:45.980515 2444124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:31:46.069019 2444124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:31:46.069217 2444124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:31:46.069354 2444124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:31:46.086323 2444124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:31:41.564217 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 02:31:41.564316 2444942 ubuntu.go:71] root file system type: overlay
	I0110 02:31:41.564502 2444942 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 02:31:41.564636 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.591765 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.592086 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.592175 2444942 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 02:31:41.761531 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
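	Note: the empty ExecStart= line in the unit above is the standard systemd idiom for clearing an inherited start command before setting a replacement, exactly as the embedded comment explains. A hedged way to confirm the override once the unit is installed (assuming shell access to the node):

	  sudo systemctl daemon-reload
	  sudo systemctl cat docker.service | grep -c '^ExecStart='   # expect 2: the reset line and the real command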
	I0110 02:31:41.761616 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.782449 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.782827 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.782851 2444942 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 02:31:43.042474 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 02:31:41.754593192 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
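	Note: the diff-or-install one-liner above relies on diff exiting non-zero when the files differ, so the move/daemon-reload/enable/restart branch runs only when the rendered unit actually changed. The same idempotent idiom, sketched for an arbitrary unit (names are illustrative):

	  sudo diff -u /lib/systemd/system/foo.service /tmp/foo.service.new \
	    || { sudo mv /tmp/foo.service.new /lib/systemd/system/foo.service \
	         && sudo systemctl daemon-reload && sudo systemctl restart foo.service; }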
	I0110 02:31:43.042496 2444942 machine.go:97] duration metric: took 5.698448584s to provisionDockerMachine
	I0110 02:31:43.042508 2444942 client.go:176] duration metric: took 11.271502022s to LocalClient.Create
	I0110 02:31:43.042522 2444942 start.go:167] duration metric: took 11.271565709s to libmachine.API.Create "force-systemd-flag-389625"
	I0110 02:31:43.042529 2444942 start.go:293] postStartSetup for "force-systemd-flag-389625" (driver="docker")
	I0110 02:31:43.042539 2444942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:31:43.042594 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:31:43.042629 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.076614 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.196482 2444942 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:31:43.201700 2444942 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:31:43.201726 2444942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:31:43.201737 2444942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/addons for local assets ...
	I0110 02:31:43.201796 2444942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/files for local assets ...
	I0110 02:31:43.201877 2444942 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> 22228772.pem in /etc/ssl/certs
	I0110 02:31:43.201885 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /etc/ssl/certs/22228772.pem
	I0110 02:31:43.201986 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:31:43.214196 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:43.241904 2444942 start.go:296] duration metric: took 199.360809ms for postStartSetup
	I0110 02:31:43.242273 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:43.263273 2444942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json ...
	I0110 02:31:43.263543 2444942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:31:43.263584 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.283380 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.391153 2444942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:31:43.396781 2444942 start.go:128] duration metric: took 11.628189455s to createHost
	I0110 02:31:43.396804 2444942 start.go:83] releasing machines lock for "force-systemd-flag-389625", held for 11.628322055s
	I0110 02:31:43.396875 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:43.415596 2444942 ssh_runner.go:195] Run: cat /version.json
	I0110 02:31:43.415661 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.415925 2444942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:31:43.415983 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.442514 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.477676 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.711077 2444942 ssh_runner.go:195] Run: systemctl --version
	I0110 02:31:43.721326 2444942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:31:43.726734 2444942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:31:43.726807 2444942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:31:43.760612 2444942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
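	Note: the find command above sidelines any bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they cannot conflict with the CNI minikube manages. A hedged sketch of reversing those renames by hand, should that ever be needed:

	  sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	    -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;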
	I0110 02:31:43.760636 2444942 start.go:496] detecting cgroup driver to use...
	I0110 02:31:43.760650 2444942 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:43.760747 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:43.776486 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 02:31:43.785831 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 02:31:43.795047 2444942 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 02:31:43.795106 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 02:31:43.804716 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:43.814084 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 02:31:43.823155 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:43.832515 2444942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:31:43.841283 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 02:31:43.850677 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 02:31:43.859949 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 02:31:43.869426 2444942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:31:43.878026 2444942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:31:43.886454 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:44.030564 2444942 ssh_runner.go:195] Run: sudo systemctl restart containerd
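	Note: the run of sed commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox (pause) image, normalizing the runtime to io.containerd.runc.v2, and setting SystemdCgroup = true to match the enforced systemd cgroup driver, after which containerd is restarted. The key toggle in isolation (sed expression taken from the log; the verification line is illustrative):

	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	  sudo systemctl restart containerd
	  grep 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true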
	I0110 02:31:44.134281 2444942 start.go:496] detecting cgroup driver to use...
	I0110 02:31:44.134314 2444942 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:44.134390 2444942 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 02:31:44.164357 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:44.178141 2444942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:31:44.203502 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:44.225293 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 02:31:44.259875 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:44.298197 2444942 ssh_runner.go:195] Run: which cri-dockerd
	I0110 02:31:44.302282 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 02:31:44.310035 2444942 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 02:31:44.323184 2444942 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 02:31:44.479958 2444942 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 02:31:44.628745 2444942 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 02:31:44.628855 2444942 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
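	Note: the 129-byte daemon.json payload itself is not printed in the log, but its purpose (per docker.go:578) is to force Docker onto the systemd cgroup driver. A representative daemon.json that does this (illustrative content; the exact bytes minikube writes may differ):

	  {
	    "exec-opts": ["native.cgroupdriver=systemd"]
	  }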
	I0110 02:31:44.646424 2444942 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 02:31:44.659407 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:44.806969 2444942 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 02:31:45.429132 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:31:45.449741 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 02:31:45.466128 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:45.483936 2444942 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 02:31:45.652722 2444942 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 02:31:45.851372 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.020791 2444942 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 02:31:46.040175 2444942 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 02:31:46.054245 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.202922 2444942 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 02:31:46.282568 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:46.299250 2444942 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 02:31:46.299324 2444942 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 02:31:46.304150 2444942 start.go:574] Will wait 60s for crictl version
	I0110 02:31:46.304219 2444942 ssh_runner.go:195] Run: which crictl
	I0110 02:31:46.309882 2444942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:31:46.365333 2444942 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 02:31:46.365407 2444942 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:46.397294 2444942 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:46.430776 2444942 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 02:31:46.430856 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:46.446745 2444942 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:31:46.450899 2444942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:46.460438 2444942 kubeadm.go:884] updating cluster {Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:31:46.460546 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:46.460598 2444942 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:46.482795 2444942 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:46.482816 2444942 docker.go:624] Images already preloaded, skipping extraction
	I0110 02:31:46.482894 2444942 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:46.503709 2444942 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:46.503732 2444942 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:31:46.503741 2444942 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0110 02:31:46.503828 2444942 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-389625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
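	Note: the kubelet drop-in above uses the same ExecStart= reset idiom as the docker unit, and the next command cross-checks Docker's effective cgroup driver, since a kubelet/runtime cgroup-driver mismatch is a classic cause of kubelet start failures. The check in isolation:

	  docker info --format '{{.CgroupDriver}}'   # should print: systemd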
	I0110 02:31:46.503890 2444942 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 02:31:46.568277 2444942 cni.go:84] Creating CNI manager for ""
	I0110 02:31:46.568357 2444942 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:46.568393 2444942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:31:46.568445 2444942 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-389625 NodeName:force-systemd-flag-389625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:31:46.568620 2444942 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-389625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
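	Note: the generated kubeadm config above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single YAML document stream. A hedged way to validate such a file without touching node state (standard kubeadm flag; path from the log):

	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run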
	I0110 02:31:46.568728 2444942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:31:46.576738 2444942 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:31:46.576804 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:31:46.584333 2444942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0110 02:31:46.597086 2444942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:31:46.609903 2444942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0110 02:31:46.623198 2444942 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:31:46.627340 2444942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:46.637410 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.813351 2444942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:31:46.853529 2444942 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625 for IP: 192.168.76.2
	I0110 02:31:46.853605 2444942 certs.go:195] generating shared ca certs ...
	I0110 02:31:46.853636 2444942 certs.go:227] acquiring lock for ca certs: {Name:mk3365aee58ca444945faa08aa6e1c1a1b730f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.853847 2444942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key
	I0110 02:31:46.853930 2444942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key
	I0110 02:31:46.853957 2444942 certs.go:257] generating profile certs ...
	I0110 02:31:46.854046 2444942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key
	I0110 02:31:46.854089 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt with IP's: []
	I0110 02:31:46.947349 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt ...
	I0110 02:31:46.947424 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt: {Name:mkc2a0e18aeb9bc161a2b7bdc69edce7c225059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.947656 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key ...
	I0110 02:31:46.947692 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key: {Name:mkbec37be7fe98f01eeac1efcff3341ee3c0872e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.947838 2444942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11
	I0110 02:31:46.947881 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:31:47.211172 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 ...
	I0110 02:31:47.211243 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11: {Name:mkb26b4fa8a855d6ab75cf6ae5986179421e433d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.211463 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11 ...
	I0110 02:31:47.211500 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11: {Name:mkaede7629652a36b550448eb511dc667db770a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.211648 2444942 certs.go:382] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt
	I0110 02:31:47.211795 2444942 certs.go:386] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key
	I0110 02:31:47.211904 2444942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key
	I0110 02:31:47.211947 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt with IP's: []
	I0110 02:31:47.431675 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt ...
	I0110 02:31:47.431751 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt: {Name:mkf0c56bc6a962d35ef411e8b1db0da0dee06e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.431961 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key ...
	I0110 02:31:47.431997 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key: {Name:mk1b1a2249d88d087b490ca8bc1af9bab6c5cd65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.432136 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:31:47.432180 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:31:47.432212 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:31:47.432258 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:31:47.432293 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:31:47.432322 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:31:47.432364 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:31:47.432398 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:31:47.432482 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem (1338 bytes)
	W0110 02:31:47.432539 2444942 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877_empty.pem, impossibly tiny 0 bytes
	I0110 02:31:47.432564 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 02:31:47.432623 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:31:47.432673 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:31:47.432730 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem (1679 bytes)
	I0110 02:31:47.432801 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:47.432861 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.432896 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem -> /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.432926 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.433610 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:31:47.453555 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:31:47.472772 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:31:47.493487 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:31:47.513383 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:31:47.534626 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:31:47.554446 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:31:47.574178 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:31:47.594420 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:31:47.614798 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem --> /usr/share/ca-certificates/2222877.pem (1338 bytes)
	I0110 02:31:47.635266 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /usr/share/ca-certificates/22228772.pem (1708 bytes)
	I0110 02:31:47.655406 2444942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:31:47.670021 2444942 ssh_runner.go:195] Run: openssl version
	I0110 02:31:47.676614 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.684815 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:31:47.693216 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.697583 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.697646 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.771210 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:31:47.792458 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:31:47.806445 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.828400 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2222877.pem /etc/ssl/certs/2222877.pem
	I0110 02:31:47.841461 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.847202 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 02:00 /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.847317 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.889947 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:31:47.898442 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2222877.pem /etc/ssl/certs/51391683.0
	I0110 02:31:47.910391 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.918871 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/22228772.pem /etc/ssl/certs/22228772.pem
	I0110 02:31:47.928363 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.932866 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 02:00 /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.932981 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.975611 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:47.984122 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/22228772.pem /etc/ssl/certs/3ec20f2e.0
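	Note: the openssl x509 -hash / ln -fs pairs above reproduce OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is reachable via a symlink named <subject-hash>.0, which is how TLS clients look certificates up. The pattern in general form (an illustrative wrapper around the commands in the log):

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"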
	I0110 02:31:47.992727 2444942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:31:47.997508 2444942 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:31:47.997608 2444942 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:47.997780 2444942 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 02:31:48.015607 2444942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:31:48.027609 2444942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:31:48.037195 2444942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:31:48.037364 2444942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:31:48.049830 2444942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:31:48.049901 2444942 kubeadm.go:158] found existing configuration files:
	
	I0110 02:31:48.049986 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:31:48.059872 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:31:48.059993 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:31:48.068889 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:31:48.079048 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:31:48.079166 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:31:48.088092 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:31:48.098007 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:31:48.098121 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:31:48.107267 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:31:48.117920 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:31:48.118032 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:31:48.127917 2444942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:31:48.180767 2444942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:31:48.180909 2444942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:31:48.290339 2444942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:31:48.290624 2444942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:31:48.290676 2444942 kubeadm.go:319] OS: Linux
	I0110 02:31:48.290728 2444942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:31:48.290780 2444942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:31:48.290831 2444942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:31:48.290894 2444942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:31:48.290946 2444942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:31:48.291013 2444942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:31:48.291064 2444942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:31:48.291119 2444942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:31:48.291170 2444942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:31:48.376921 2444942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:31:48.377171 2444942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:31:48.377352 2444942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:31:48.409493 2444942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:31:46.092497 2444124 out.go:252]   - Generating certificates and keys ...
	I0110 02:31:46.092669 2444124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:31:46.092770 2444124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:31:46.875771 2444124 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:31:47.144364 2444124 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:31:47.314724 2444124 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:31:47.984584 2444124 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:31:48.242134 2444124 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:31:48.242499 2444124 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:31:48.461465 2444124 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:31:48.461631 2444124 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:31:48.733504 2444124 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:31:48.861496 2444124 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:31:49.185510 2444124 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:31:49.185598 2444124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:31:49.425584 2444124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:31:49.777471 2444124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:31:49.961468 2444124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:31:50.177454 2444124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:31:50.374241 2444124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:31:50.374970 2444124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:31:50.381331 2444124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:31:48.416465 2444942 out.go:252]   - Generating certificates and keys ...
	I0110 02:31:48.416688 2444942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:31:48.416848 2444942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:31:48.613948 2444942 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:31:49.073506 2444942 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:31:49.428686 2444942 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:31:49.712507 2444942 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:31:49.836655 2444942 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:31:49.837353 2444942 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:31:50.119233 2444942 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:31:50.120016 2444942 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:31:50.479427 2444942 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:31:50.633494 2444942 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:31:50.705818 2444942 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:31:50.706064 2444942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:31:50.768089 2444942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:31:50.918537 2444942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:31:51.105411 2444942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:31:51.794074 2444942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:31:52.020214 2444942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:31:52.020319 2444942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:31:52.025960 2444942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:31:50.384846 2444124 out.go:252]   - Booting up control plane ...
	I0110 02:31:50.384957 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:31:50.385056 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:31:50.385129 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:31:50.414088 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:31:50.414228 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:31:50.422787 2444124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:31:50.423116 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:31:50.423172 2444124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:31:50.599415 2444124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:31:50.599570 2444124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:31:52.029579 2444942 out.go:252]   - Booting up control plane ...
	I0110 02:31:52.029696 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:31:52.030816 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:31:52.032102 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:31:52.049145 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:31:52.049263 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:31:52.057814 2444942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:31:52.058122 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:31:52.058167 2444942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:31:52.196343 2444942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:31:52.196468 2444942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:35:50.600577 2444124 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001263977s
	I0110 02:35:50.601197 2444124 kubeadm.go:319] 
	I0110 02:35:50.601279 2444124 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:35:50.601345 2444124 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:35:50.601480 2444124 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:35:50.601496 2444124 kubeadm.go:319] 
	I0110 02:35:50.601596 2444124 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:35:50.601630 2444124 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:35:50.601664 2444124 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:35:50.601672 2444124 kubeadm.go:319] 
	I0110 02:35:50.606506 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:35:50.606929 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:35:50.607043 2444124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:35:50.607291 2444124 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:35:50.607301 2444124 kubeadm.go:319] 
	I0110 02:35:50.607370 2444124 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 02:35:50.607511 2444124 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001263977s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
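	The timeout above is kubeadm polling the kubelet's /healthz endpoint on port 10248. Assuming the docker driver names the node container after the profile (force-systemd-env-405089 in this run) and that curl is present in the node image, the same probe can be reproduced by hand:

		docker exec force-systemd-env-405089 curl -sSL http://127.0.0.1:10248/healthz
		docker exec force-systemd-env-405089 systemctl status kubelet --no-pager

	A healthy kubelet answers "ok"; here it never does, so the static control-plane pods are never launched.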
	
	I0110 02:35:50.607594 2444124 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 02:35:51.030219 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:35:51.043577 2444124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:35:51.043642 2444124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:35:51.051651 2444124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:35:51.051673 2444124 kubeadm.go:158] found existing configuration files:
	
	I0110 02:35:51.051734 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:35:51.059812 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:35:51.059882 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:35:51.068320 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:35:51.076706 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:35:51.076822 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:35:51.084858 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:35:51.093615 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:35:51.093686 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:35:51.101862 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:35:51.110328 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:35:51.110395 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
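	Before retrying, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that lacks it. kubeadm reset already deleted the files, so each grep exits with status 2 and the rm calls are no-ops; the sequence above is roughly equivalent to this sketch:

		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
		    || sudo rm -f "/etc/kubernetes/${f}.conf"
		done

	The retry that follows again passes SystemVerification in --ignore-preflight-errors, since (per kubeadm.go:215 above) verification is skipped under the docker driver.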
	I0110 02:35:51.118285 2444124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:35:51.161915 2444124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:35:51.161979 2444124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:35:51.247247 2444124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:35:51.247324 2444124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:35:51.247366 2444124 kubeadm.go:319] OS: Linux
	I0110 02:35:51.247418 2444124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:35:51.247473 2444124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:35:51.247523 2444124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:35:51.247577 2444124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:35:51.247629 2444124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:35:51.247681 2444124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:35:51.247730 2444124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:35:51.247783 2444124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:35:51.247850 2444124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:35:51.316861 2444124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:35:51.316975 2444124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:35:51.317095 2444124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:35:51.330675 2444124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:35:51.336295 2444124 out.go:252]   - Generating certificates and keys ...
	I0110 02:35:51.336385 2444124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:35:51.336458 2444124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:35:51.336535 2444124 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:35:51.336596 2444124 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:35:51.336666 2444124 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:35:51.336720 2444124 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:35:51.336783 2444124 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:35:51.336844 2444124 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:35:51.336918 2444124 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:35:51.336991 2444124 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:35:51.337028 2444124 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:35:51.337115 2444124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:35:51.445329 2444124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:35:51.773916 2444124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:35:51.845501 2444124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:35:52.201867 2444124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:35:52.810005 2444124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:35:52.810953 2444124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:35:52.813391 2444124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:35:52.196251 2444942 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000438294s
	I0110 02:35:52.196284 2444942 kubeadm.go:319] 
	I0110 02:35:52.196342 2444942 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:35:52.196375 2444942 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:35:52.196480 2444942 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:35:52.196486 2444942 kubeadm.go:319] 
	I0110 02:35:52.196591 2444942 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:35:52.196622 2444942 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:35:52.196653 2444942 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:35:52.196658 2444942 kubeadm.go:319] 
	I0110 02:35:52.202848 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:35:52.203270 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:35:52.203377 2444942 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:35:52.203640 2444942 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 02:35:52.203646 2444942 kubeadm.go:319] 
	I0110 02:35:52.203714 2444942 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 02:35:52.203844 2444942 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000438294s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
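	Both profiles fail identically; only the final probe error differs (context deadline exceeded for force-systemd-env-405089, connection refused for force-systemd-flag-389625). The recurring cgroups v1 warning names a kubelet configuration option; on kubelet v1.35+ keeping v1 support means setting it explicitly, sketched here against the config path from the log (the YAML field is the lowerCamel form of the option named in the warning; whether this is the right fix for this failure is a separate question):

		# sketch only: explicitly permit cgroup v1 for kubelet v1.35+
		cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
		failCgroupV1: false
		EOF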
	
	I0110 02:35:52.203917 2444942 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 02:35:52.668064 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:35:52.684406 2444942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:35:52.684471 2444942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:35:52.694960 2444942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:35:52.695030 2444942 kubeadm.go:158] found existing configuration files:
	
	I0110 02:35:52.695114 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:35:52.703880 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:35:52.703940 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:35:52.712165 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:35:52.721863 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:35:52.721985 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:35:52.731171 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:35:52.740287 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:35:52.740404 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:35:52.748618 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:35:52.757969 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:35:52.758029 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:35:52.766204 2444942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:35:52.819064 2444942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:35:52.819481 2444942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:35:52.927559 2444942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:35:52.927642 2444942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:35:52.927679 2444942 kubeadm.go:319] OS: Linux
	I0110 02:35:52.927725 2444942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:35:52.927773 2444942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:35:52.927829 2444942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:35:52.927879 2444942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:35:52.927933 2444942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:35:52.927982 2444942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:35:52.928027 2444942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:35:52.928076 2444942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:35:52.928122 2444942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:35:53.012278 2444942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:35:53.012391 2444942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:35:53.012483 2444942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:35:53.037432 2444942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:35:52.816841 2444124 out.go:252]   - Booting up control plane ...
	I0110 02:35:52.816944 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:35:52.817023 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:35:52.828369 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:35:52.849764 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:35:52.849875 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:35:52.858304 2444124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:35:52.858625 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:35:52.858672 2444124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:35:53.019244 2444124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:35:53.019363 2444124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:35:53.040921 2444942 out.go:252]   - Generating certificates and keys ...
	I0110 02:35:53.041059 2444942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:35:53.041136 2444942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:35:53.041218 2444942 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:35:53.041284 2444942 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:35:53.041359 2444942 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:35:53.041417 2444942 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:35:53.041484 2444942 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:35:53.041550 2444942 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:35:53.041630 2444942 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:35:53.041707 2444942 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:35:53.041749 2444942 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:35:53.041814 2444942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:35:53.331718 2444942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:35:53.451638 2444942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:35:53.804134 2444942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:35:54.036793 2444942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:35:54.605846 2444942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:35:54.606454 2444942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:35:54.608995 2444942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:35:54.612162 2444942 out.go:252]   - Booting up control plane ...
	I0110 02:35:54.612265 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:35:54.612343 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:35:54.612409 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:35:54.632870 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:35:54.633407 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:35:54.640913 2444942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:35:54.641255 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:35:54.641302 2444942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:35:54.777508 2444942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:35:54.777628 2444942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:39:53.016701 2444124 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000239671s
	I0110 02:39:53.016728 2444124 kubeadm.go:319] 
	I0110 02:39:53.016782 2444124 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:39:53.016814 2444124 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:39:53.016913 2444124 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:39:53.016917 2444124 kubeadm.go:319] 
	I0110 02:39:53.017016 2444124 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:39:53.017069 2444124 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:39:53.017100 2444124 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:39:53.017110 2444124 kubeadm.go:319] 
	I0110 02:39:53.026674 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:39:53.027207 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:39:53.027347 2444124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:39:53.027605 2444124 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:39:53.027624 2444124 kubeadm.go:319] 
	I0110 02:39:53.027707 2444124 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:39:53.027777 2444124 kubeadm.go:403] duration metric: took 8m7.349453429s to StartCluster
	I0110 02:39:53.027818 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:39:53.027886 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:39:53.065190 2444124 cri.go:96] found id: ""
	I0110 02:39:53.065233 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.065243 2444124 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:39:53.065251 2444124 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:39:53.065314 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:39:53.090958 2444124 cri.go:96] found id: ""
	I0110 02:39:53.090984 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.090993 2444124 logs.go:284] No container was found matching "etcd"
	I0110 02:39:53.091000 2444124 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:39:53.091077 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:39:53.117931 2444124 cri.go:96] found id: ""
	I0110 02:39:53.117955 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.117964 2444124 logs.go:284] No container was found matching "coredns"
	I0110 02:39:53.117972 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:39:53.118031 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:39:53.143724 2444124 cri.go:96] found id: ""
	I0110 02:39:53.143749 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.143757 2444124 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:39:53.143764 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:39:53.143823 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:39:53.168452 2444124 cri.go:96] found id: ""
	I0110 02:39:53.168477 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.168486 2444124 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:39:53.168492 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:39:53.168550 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:39:53.194925 2444124 cri.go:96] found id: ""
	I0110 02:39:53.194960 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.194969 2444124 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:39:53.194976 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:39:53.195047 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:39:53.220058 2444124 cri.go:96] found id: ""
	I0110 02:39:53.220083 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.220100 2444124 logs.go:284] No container was found matching "kindnet"
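	Every crictl query above returns an empty ID list, consistent with the kubelet never coming up: without a running kubelet, none of the static control-plane pods (or kube-proxy/kindnet) are ever created. The same check can be repeated on the node, for example:

		sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver

	Empty output means no matching container exists in any state, not merely that none is currently running (-a includes exited containers).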
	I0110 02:39:53.220110 2444124 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:39:53.220122 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:39:53.285618 2444124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:53.276636    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.277286    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.278970    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.279530    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.281145    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:39:53.276636    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.277286    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.278970    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.279530    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.281145    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:39:53.285639 2444124 logs.go:123] Gathering logs for Docker ...
	I0110 02:39:53.285650 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 02:39:53.308836 2444124 logs.go:123] Gathering logs for container status ...
	I0110 02:39:53.308869 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:39:53.341659 2444124 logs.go:123] Gathering logs for kubelet ...
	I0110 02:39:53.341684 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:39:53.401462 2444124 logs.go:123] Gathering logs for dmesg ...
	I0110 02:39:53.401506 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
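	Of the collectors above, only describe nodes fails (connection refused on localhost:8443, since no apiserver exists); the journal, container-status, and dmesg collectors need only the node itself. For this symptom, one plausible way to narrow the kubelet journal by hand (the grep pattern is illustrative):

		sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'cgroup|systemd|fail'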
	W0110 02:39:53.419441 2444124 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:39:53.419490 2444124 out.go:285] * 
	W0110 02:39:53.419567 2444124 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:53.419587 2444124 out.go:285] * 
	W0110 02:39:53.419862 2444124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:39:53.424999 2444124 out.go:203] 
	W0110 02:39:53.428767 2444124 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:53.428834 2444124 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:39:53.428862 2444124 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:39:53.431997 2444124 out.go:203] 
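Both failed starts hit the same wall, and the warnings above point at cgroups rather than at systemd itself: the host kernel is 5.15.0-1084-aws on Ubuntu 20.04, which boots with the legacy cgroup v1 hierarchy by default, and the kubelet journal later in this report shows kubelet v1.35 refusing to pass configuration validation on a cgroup v1 host. A quick way to confirm which cgroup mode a host (or the minikube node container) is running, assuming standard coreutils:

	stat -fc %T /sys/fs/cgroup   # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on the legacy v1 hierarchy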
	I0110 02:39:54.778464 2444942 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001115013s
	I0110 02:39:54.778491 2444942 kubeadm.go:319] 
	I0110 02:39:54.778555 2444942 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:39:54.778601 2444942 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:39:54.778725 2444942 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:39:54.778735 2444942 kubeadm.go:319] 
	I0110 02:39:54.778847 2444942 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:39:54.778883 2444942 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:39:54.778919 2444942 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:39:54.778927 2444942 kubeadm.go:319] 
	I0110 02:39:54.783246 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:39:54.783712 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:39:54.783842 2444942 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:39:54.784133 2444942 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 02:39:54.784143 2444942 kubeadm.go:319] 
	I0110 02:39:54.784229 2444942 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:39:54.784293 2444942 kubeadm.go:403] duration metric: took 8m6.786690861s to StartCluster
	I0110 02:39:54.784334 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:39:54.784409 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:39:54.833811 2444942 cri.go:96] found id: ""
	I0110 02:39:54.833848 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.833857 2444942 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:39:54.833864 2444942 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:39:54.833927 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:39:54.874597 2444942 cri.go:96] found id: ""
	I0110 02:39:54.874676 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.874698 2444942 logs.go:284] No container was found matching "etcd"
	I0110 02:39:54.874717 2444942 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:39:54.874799 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:39:54.907340 2444942 cri.go:96] found id: ""
	I0110 02:39:54.907364 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.907372 2444942 logs.go:284] No container was found matching "coredns"
	I0110 02:39:54.907379 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:39:54.907439 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:39:54.942974 2444942 cri.go:96] found id: ""
	I0110 02:39:54.943001 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.943010 2444942 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:39:54.943018 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:39:54.943077 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:39:54.981427 2444942 cri.go:96] found id: ""
	I0110 02:39:54.981449 2444942 logs.go:282] 0 containers: []
	W0110 02:39:54.981458 2444942 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:39:54.981465 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:39:54.981531 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:39:55.041924 2444942 cri.go:96] found id: ""
	I0110 02:39:55.041946 2444942 logs.go:282] 0 containers: []
	W0110 02:39:55.041994 2444942 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:39:55.042004 2444942 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:39:55.042072 2444942 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:39:55.114566 2444942 cri.go:96] found id: ""
	I0110 02:39:55.114587 2444942 logs.go:282] 0 containers: []
	W0110 02:39:55.114596 2444942 logs.go:284] No container was found matching "kindnet"
	I0110 02:39:55.114606 2444942 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:39:55.114634 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:39:55.229791 2444942 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:55.208165    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.208559    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.217197    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.218001    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.222039    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:39:55.208165    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.208559    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.217197    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.218001    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:55.222039    5551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:39:55.229812 2444942 logs.go:123] Gathering logs for Docker ...
	I0110 02:39:55.229837 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 02:39:55.267290 2444942 logs.go:123] Gathering logs for container status ...
	I0110 02:39:55.267338 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:39:55.359988 2444942 logs.go:123] Gathering logs for kubelet ...
	I0110 02:39:55.360018 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:39:55.456371 2444942 logs.go:123] Gathering logs for dmesg ...
	I0110 02:39:55.456405 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0110 02:39:55.476932 2444942 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:39:55.476973 2444942 out.go:285] * 
	W0110 02:39:55.477022 2444942 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:55.477184 2444942 out.go:285] * 
	W0110 02:39:55.477459 2444942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:39:55.484573 2444942 out.go:203] 
	W0110 02:39:55.488432 2444942 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001115013s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:55.488495 2444942 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:39:55.488519 2444942 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:39:55.491695 2444942 out.go:203] 
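The suggestion above targets a cgroup-driver mismatch, but that does not appear to be the failure mode here: the Docker log below shows cri-dockerd already setting cgroupDriver to systemd, while the kubelet journal at the end of this dump fails validation outright because the host is on cgroup v1. For completeness, the engine-side driver can be read directly from docker's info formatter (run inside the node, e.g. via minikube ssh):

	docker info --format '{{.CgroupDriver}}'   # "systemd" or "cgroupfs"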
	
	
	==> Docker <==
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.012742498Z" level=info msg="Restoring containers: start."
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.039653876Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.059486515Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.370327087Z" level=info msg="Loading containers: done."
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.394815390Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.395046844Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.395153262Z" level=info msg="Initializing buildkit"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.416439936Z" level=info msg="Completed buildkit initialization"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.425849266Z" level=info msg="Daemon has completed initialization"
	Jan 10 02:31:45 force-systemd-flag-389625 systemd[1]: Started docker.service - Docker Application Container Engine.
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.429143505Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.429362668Z" level=info msg="API listen on /run/docker.sock"
	Jan 10 02:31:45 force-systemd-flag-389625 dockerd[1145]: time="2026-01-10T02:31:45.429475954Z" level=info msg="API listen on [::]:2376"
	Jan 10 02:31:46 force-systemd-flag-389625 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Start docker client with request timeout 0s"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Loaded network plugin cni"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Setting cgroupDriver systemd"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 10 02:31:46 force-systemd-flag-389625 cri-dockerd[1430]: time="2026-01-10T02:31:46Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 10 02:31:46 force-systemd-flag-389625 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:57.254952    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:57.255524    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:57.257252    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:57.257716    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:57.259170    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 01:53] kauditd_printk_skb: 8 callbacks suppressed
	[Jan10 02:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:39:57 up 10:22,  0 user,  load average: 0.20, 0.85, 1.77
	Linux force-systemd-flag-389625 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 02:39:53 force-systemd-flag-389625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:54 force-systemd-flag-389625 kubelet[5480]: E0110 02:39:54.286012    5480 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:54 force-systemd-flag-389625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:55 force-systemd-flag-389625 kubelet[5528]: E0110 02:39:55.119544    5528 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:55 force-systemd-flag-389625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:55 force-systemd-flag-389625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:55 force-systemd-flag-389625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 02:39:55 force-systemd-flag-389625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:55 force-systemd-flag-389625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:56 force-systemd-flag-389625 kubelet[5581]: E0110 02:39:56.133774    5581 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:56 force-systemd-flag-389625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:56 force-systemd-flag-389625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:56 force-systemd-flag-389625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 10 02:39:56 force-systemd-flag-389625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:56 force-systemd-flag-389625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:57 force-systemd-flag-389625 kubelet[5628]: E0110 02:39:57.022536    5628 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:57 force-systemd-flag-389625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:57 force-systemd-flag-389625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
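The kubelet journal above carries the actual root cause: "kubelet is configured to not run on a host using cgroup v1". Per the [WARNING SystemVerification] lines earlier in this log, kubelet v1.35 and newer require an explicit opt-in to keep running on cgroup v1, and the kubeadm preflight validation must be skipped as well. As a minimal sketch, the KubeletConfiguration fragment that warning refers to would look like this:

	# opt back in to cgroup v1, per the FailCgroupV1 warning quoted above
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false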
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-389625 -n force-systemd-flag-389625
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-389625 -n force-systemd-flag-389625: exit status 6 (457.107211ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0110 02:39:58.090441 2458176 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-389625" does not appear in /home/jenkins/minikube-integration/22414-2221005/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-389625" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-389625" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-389625
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-389625: (2.125299621s)
--- FAIL: TestForceSystemdFlag (508.91s)

TestForceSystemdEnv (508.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-405089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-405089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m24.323722094s)
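This is the same failure as TestForceSystemdFlag, reached through the other configuration path: the start command above carries no --force-systemd flag, and the stderr dump below shows the setting coming from MINIKUBE_FORCE_SYSTEMD=true in the environment instead. A roughly equivalent manual invocation, reusing the profile name from this run, would be:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-405089 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker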

-- stdout --
	* [force-systemd-env-405089] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-405089" primary control-plane node in "force-systemd-env-405089" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	
	

-- /stdout --
** stderr ** 
	I0110 02:31:29.194435 2444124 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:31:29.194649 2444124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:29.194671 2444124 out.go:374] Setting ErrFile to fd 2...
	I0110 02:31:29.194691 2444124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:29.194992 2444124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:31:29.195459 2444124 out.go:368] Setting JSON to false
	I0110 02:31:29.196443 2444124 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36839,"bootTime":1767975451,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 02:31:29.196541 2444124 start.go:143] virtualization:  
	I0110 02:31:29.201536 2444124 out.go:179] * [force-systemd-env-405089] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:31:29.204708 2444124 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:31:29.204786 2444124 notify.go:221] Checking for updates...
	I0110 02:31:29.211084 2444124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:31:29.214119 2444124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 02:31:29.217063 2444124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 02:31:29.219923 2444124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:31:29.222929 2444124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0110 02:31:29.226378 2444124 config.go:182] Loaded profile config "offline-docker-420658": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:29.226479 2444124 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:31:29.292344 2444124 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:31:29.292531 2444124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:29.427995 2444124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:31:29.413739838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:29.428093 2444124 docker.go:319] overlay module found
	I0110 02:31:29.431316 2444124 out.go:179] * Using the docker driver based on user configuration
	I0110 02:31:29.434209 2444124 start.go:309] selected driver: docker
	I0110 02:31:29.434231 2444124 start.go:928] validating driver "docker" against <nil>
	I0110 02:31:29.434253 2444124 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:31:29.434915 2444124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:29.533492 2444124 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:31:29.522281321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:29.533646 2444124 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:31:29.533864 2444124 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:31:29.536845 2444124 out.go:179] * Using Docker driver with root privileges
	I0110 02:31:29.539963 2444124 cni.go:84] Creating CNI manager for ""
	I0110 02:31:29.540051 2444124 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:29.540070 2444124 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 02:31:29.540160 2444124 start.go:353] cluster config:
	{Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:29.543235 2444124 out.go:179] * Starting "force-systemd-env-405089" primary control-plane node in "force-systemd-env-405089" cluster
	I0110 02:31:29.546210 2444124 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 02:31:29.549122 2444124 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:31:29.551969 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:29.552019 2444124 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 02:31:29.552030 2444124 cache.go:65] Caching tarball of preloaded images
	I0110 02:31:29.552129 2444124 preload.go:251] Found /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 02:31:29.552144 2444124 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 02:31:29.552245 2444124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/config.json ...
	I0110 02:31:29.552268 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/config.json: {Name:mkff40e56481cc76544b251da46242467cdd6cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
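The WriteFile lock logged here retries every 500ms for up to a minute (Delay:500ms Timeout:1m0s) before failing. A minimal Go sketch of that acquire loop, using a hypothetical file-based tryLock in place of minikube's actual lock implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// tryLock is a hypothetical stand-in: O_CREATE|O_EXCL on a lock file
	// either succeeds atomically or fails because another holder exists.
	func tryLock(path string) (func(), error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err != nil {
			return nil, err
		}
		f.Close()
		return func() { os.Remove(path) }, nil
	}

	// acquire retries every delay until timeout, mirroring Delay:500ms Timeout:1m0s.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			if release, err := tryLock(path); err == nil {
				return release, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to write config.json")
	}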
	I0110 02:31:29.552426 2444124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:31:29.581708 2444124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:31:29.581734 2444124 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:31:29.581749 2444124 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:31:29.581791 2444124 start.go:360] acquireMachinesLock for force-systemd-env-405089: {Name:mkb60e9ce670cd7b26d1bc73996df7ef68c386f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:31:29.581901 2444124 start.go:364] duration metric: took 83.059µs to acquireMachinesLock for "force-systemd-env-405089"
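The recurring "duration metric: took ..." lines follow the standard Go idiom of capturing a start time and printing time.Since when the step finishes; a small sketch:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		start := time.Now()
		// stand-in for the timed section, e.g. scanning the machines directory
		time.Sleep(83 * time.Microsecond) // matches the 83.059µs reported above
		fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
	}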
	I0110 02:31:29.581932 2444124 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 02:31:29.582003 2444124 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:31:29.585409 2444124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:31:29.585634 2444124 start.go:159] libmachine.API.Create for "force-systemd-env-405089" (driver="docker")
	I0110 02:31:29.585669 2444124 client.go:173] LocalClient.Create starting
	I0110 02:31:29.585728 2444124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem
	I0110 02:31:29.585764 2444124 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:29.585784 2444124 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:29.585842 2444124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem
	I0110 02:31:29.585863 2444124 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:29.585883 2444124 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:29.586231 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:31:29.610121 2444124 cli_runner.go:211] docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:31:29.610191 2444124 network_create.go:284] running [docker network inspect force-systemd-env-405089] to gather additional debugging logs...
	I0110 02:31:29.610221 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089
	W0110 02:31:29.644159 2444124 cli_runner.go:211] docker network inspect force-systemd-env-405089 returned with exit code 1
	I0110 02:31:29.644186 2444124 network_create.go:287] error running [docker network inspect force-systemd-env-405089]: docker network inspect force-systemd-env-405089: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-405089 not found
	I0110 02:31:29.644198 2444124 network_create.go:289] output of [docker network inspect force-systemd-env-405089]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-405089 not found
	
	** /stderr **
	I0110 02:31:29.644302 2444124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:29.676112 2444124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
	I0110 02:31:29.676635 2444124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0306382db894 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:0a:12:a6:69:af} reservation:<nil>}
	I0110 02:31:29.676947 2444124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42f1ed7cacde IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:5d:25:88:ef:ef} reservation:<nil>}
	I0110 02:31:29.677429 2444124 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6c9be719dc1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:8d:64:6b:58:be} reservation:<nil>}
	I0110 02:31:29.678964 2444124 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5e090}
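The scan above walks candidate private /24 subnets in steps of 9 in the third octet (.49, .58, .67, .76) and settles on the first free one (.85). A compact Go sketch of that probe, with a hypothetical isTaken standing in for the real bridge-interface check:

	package main

	import "fmt"

	// isTaken is a hypothetical stand-in for minikube's check that a host
	// bridge already occupies the subnet (the network.go:211 lines above).
	func isTaken(subnet string) bool {
		used := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		return used[subnet]
	}

	func main() {
		// Candidate third octets step by 9, matching the log above.
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if isTaken(subnet) {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet) // picks 192.168.85.0/24
			return
		}
	}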
	I0110 02:31:29.679020 2444124 network_create.go:124] attempt to create docker network force-systemd-env-405089 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:31:29.679130 2444124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-405089 force-systemd-env-405089
	I0110 02:31:29.775823 2444124 network_create.go:108] docker network force-systemd-env-405089 192.168.85.0/24 created
	I0110 02:31:29.775860 2444124 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-405089" container
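The static IP follows from the subnet by convention: the gateway takes .1 and the first node container takes .2. A sketch with net/netip, assuming that convention:

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		prefix := netip.MustParsePrefix("192.168.85.0/24")
		gateway := prefix.Addr().Next() // 192.168.85.1
		nodeIP := gateway.Next()        // 192.168.85.2, as calculated by kic.go above
		fmt.Println(gateway, nodeIP)
	}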
	I0110 02:31:29.775934 2444124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:31:29.794158 2444124 cli_runner.go:164] Run: docker volume create force-systemd-env-405089 --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:31:29.812548 2444124 oci.go:103] Successfully created a docker volume force-systemd-env-405089
	I0110 02:31:29.812646 2444124 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-405089-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --entrypoint /usr/bin/test -v force-systemd-env-405089:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:31:30.375187 2444124 oci.go:107] Successfully prepared a docker volume force-systemd-env-405089
	I0110 02:31:30.375254 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:30.375264 2444124 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:31:30.375340 2444124 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-405089:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:31:33.218633 2444124 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-405089:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (2.843257443s)
	I0110 02:31:33.218668 2444124 kic.go:203] duration metric: took 2.843399774s to extract preloaded images to volume ...
	W0110 02:31:33.218794 2444124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:31:33.218913 2444124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:31:33.308593 2444124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-405089 --name force-systemd-env-405089 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-405089 --network force-systemd-env-405089 --ip 192.168.85.2 --volume force-systemd-env-405089:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:31:33.809884 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Running}}
	I0110 02:31:33.863227 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:33.899538 2444124 cli_runner.go:164] Run: docker exec force-systemd-env-405089 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:31:33.973139 2444124 oci.go:144] the created container "force-systemd-env-405089" has a running status.
	I0110 02:31:33.973175 2444124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa...
	I0110 02:31:34.190131 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:31:34.190189 2444124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:31:34.225021 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:34.249136 2444124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:31:34.249157 2444124 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-405089 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:31:34.332008 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:34.365192 2444124 machine.go:94] provisionDockerMachine start ...
	I0110 02:31:34.365297 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:34.393974 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:34.394308 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:34.394318 2444124 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:31:34.394993 2444124 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33838->127.0.0.1:34981: read: connection reset by peer
	I0110 02:31:37.568807 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-405089
	
	I0110 02:31:37.568832 2444124 ubuntu.go:182] provisioning hostname "force-systemd-env-405089"
	I0110 02:31:37.568912 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:37.588249 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.588558 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:37.588583 2444124 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-405089 && echo "force-systemd-env-405089" | sudo tee /etc/hostname
	I0110 02:31:37.746546 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-405089
	
	I0110 02:31:37.746628 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:37.767459 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.767772 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:37.767794 2444124 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-405089' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-405089/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-405089' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:31:37.917803 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:31:37.917839 2444124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2221005/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2221005/.minikube}
	I0110 02:31:37.917867 2444124 ubuntu.go:190] setting up certificates
	I0110 02:31:37.917878 2444124 provision.go:84] configureAuth start
	I0110 02:31:37.917939 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:37.936050 2444124 provision.go:143] copyHostCerts
	I0110 02:31:37.936093 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:37.936126 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem, removing ...
	I0110 02:31:37.936143 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:37.936221 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem (1082 bytes)
	I0110 02:31:37.936318 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:37.936341 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem, removing ...
	I0110 02:31:37.936350 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:37.936386 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem (1123 bytes)
	I0110 02:31:37.936442 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:37.936463 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem, removing ...
	I0110 02:31:37.936471 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:37.936496 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem (1679 bytes)
	I0110 02:31:37.936548 2444124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-405089 san=[127.0.0.1 192.168.85.2 force-systemd-env-405089 localhost minikube]
	I0110 02:31:38.258206 2444124 provision.go:177] copyRemoteCerts
	I0110 02:31:38.258288 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:31:38.258339 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.276203 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
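Each of the repeated `docker container inspect -f` calls above uses a Go template to pull the host port bound to the container's 22/tcp; the value it yields (34981) is what the ssh client connects to. The same template evaluated with Go's text/template against a minimal stub of the inspect structure:

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal stand-in for the slice of `docker inspect` JSON the template touches.
	type portBinding struct{ HostIP, HostPort string }

	type container struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	func main() {
		var c container
		c.NetworkSettings.Ports = map[string][]portBinding{
			"22/tcp": {{HostIP: "127.0.0.1", HostPort: "34981"}}, // value from the log above
		}
		// Same template string minikube passes to docker inspect -f.
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		if err := tmpl.Execute(os.Stdout, c); err != nil { // prints: 34981
			panic(err)
		}
	}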
	I0110 02:31:38.381656 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:31:38.381728 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:31:38.400027 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:31:38.400088 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0110 02:31:38.417556 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:31:38.417620 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:31:38.438553 2444124 provision.go:87] duration metric: took 520.648879ms to configureAuth
	I0110 02:31:38.438640 2444124 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:31:38.438850 2444124 config.go:182] Loaded profile config "force-systemd-env-405089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:38.438923 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.456723 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.457166 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.457186 2444124 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 02:31:38.623956 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 02:31:38.623983 2444124 ubuntu.go:71] root file system type: overlay
	I0110 02:31:38.624112 2444124 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 02:31:38.624190 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.651894 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.652212 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.652296 2444124 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 02:31:38.832340 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 02:31:38.832516 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.851001 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.851318 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.851335 2444124 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 02:31:39.848898 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 02:31:38.826649162 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0110 02:31:39.848925 2444124 machine.go:97] duration metric: took 5.483705976s to provisionDockerMachine
	I0110 02:31:39.848938 2444124 client.go:176] duration metric: took 10.263257466s to LocalClient.Create
	I0110 02:31:39.848983 2444124 start.go:167] duration metric: took 10.263350347s to libmachine.API.Create "force-systemd-env-405089"
	I0110 02:31:39.848999 2444124 start.go:293] postStartSetup for "force-systemd-env-405089" (driver="docker")
	I0110 02:31:39.849010 2444124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:31:39.849143 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:31:39.849190 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:39.867772 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:39.969324 2444124 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:31:39.972690 2444124 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:31:39.972719 2444124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:31:39.972731 2444124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/addons for local assets ...
	I0110 02:31:39.972810 2444124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/files for local assets ...
	I0110 02:31:39.972927 2444124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> 22228772.pem in /etc/ssl/certs
	I0110 02:31:39.972937 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /etc/ssl/certs/22228772.pem
	I0110 02:31:39.973066 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:31:39.981882 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:40.000017 2444124 start.go:296] duration metric: took 151.001946ms for postStartSetup
	I0110 02:31:40.000404 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:40.038533 2444124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/config.json ...
	I0110 02:31:40.038894 2444124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:31:40.038954 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.057310 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.158291 2444124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:31:40.163210 2444124 start.go:128] duration metric: took 10.581191023s to createHost
	I0110 02:31:40.163237 2444124 start.go:83] releasing machines lock for "force-systemd-env-405089", held for 10.581321237s
	I0110 02:31:40.163309 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:40.180948 2444124 ssh_runner.go:195] Run: cat /version.json
	I0110 02:31:40.181013 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.181219 2444124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:31:40.181281 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.201769 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.209162 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.305222 2444124 ssh_runner.go:195] Run: systemctl --version
	I0110 02:31:40.413487 2444124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:31:40.417954 2444124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:31:40.418043 2444124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:31:40.452568 2444124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:31:40.452596 2444124 start.go:496] detecting cgroup driver to use...
	I0110 02:31:40.452613 2444124 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:40.452712 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:40.470526 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 02:31:40.482803 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 02:31:40.492346 2444124 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 02:31:40.492457 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 02:31:40.502450 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:40.511527 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 02:31:40.520445 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:40.540761 2444124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:31:40.551850 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 02:31:40.563654 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 02:31:40.574796 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 02:31:40.585488 2444124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:31:40.595355 2444124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:31:40.609803 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:40.744003 2444124 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0110 02:31:40.876690 2444124 start.go:496] detecting cgroup driver to use...
	I0110 02:31:40.876724 2444124 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:40.876779 2444124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 02:31:40.904144 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:40.918264 2444124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:31:40.953661 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:40.974405 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 02:31:40.989753 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:41.013689 2444124 ssh_runner.go:195] Run: which cri-dockerd
	I0110 02:31:41.018251 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 02:31:41.027476 2444124 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 02:31:41.042305 2444124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 02:31:41.204191 2444124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 02:31:41.332172 2444124 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 02:31:41.332275 2444124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
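Only the size of the daemon.json pushed here (129 bytes) appears in the log, not its contents. A sketch of emitting a file that forces dockerd onto the systemd cgroup driver; the keys below are typical assumptions, not bytes read from this run:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed shape; the log records only the file size, not the content.
		cfg := map[string]interface{}{
			"exec-opts": []string{"native.cgroupdriver=systemd"},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out)) // what would land in /etc/docker/daemon.json
	}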
	I0110 02:31:41.346373 2444124 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 02:31:41.360708 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:41.514222 2444124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 02:31:42.063171 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:31:42.079374 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 02:31:42.101258 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:42.121381 2444124 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 02:31:42.317076 2444124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 02:31:42.488596 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:42.651589 2444124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 02:31:42.669531 2444124 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 02:31:42.687478 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:42.822336 2444124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 02:31:42.917629 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:42.937980 2444124 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 02:31:42.938103 2444124 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 02:31:42.943727 2444124 start.go:574] Will wait 60s for crictl version
	I0110 02:31:42.943794 2444124 ssh_runner.go:195] Run: which crictl
	I0110 02:31:42.948403 2444124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:31:42.980867 2444124 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 02:31:42.980939 2444124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:43.004114 2444124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:43.042145 2444124 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 02:31:43.042280 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:43.065360 2444124 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:31:43.069214 2444124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:43.081864 2444124 kubeadm.go:884] updating cluster {Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:31:43.081980 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:43.082036 2444124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:43.100460 2444124 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:43.100486 2444124 docker.go:624] Images already preloaded, skipping extraction
	I0110 02:31:43.100552 2444124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:43.121230 2444124 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:43.121256 2444124 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:31:43.121266 2444124 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0110 02:31:43.121361 2444124 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-405089 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:31:43.121432 2444124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 02:31:43.178538 2444124 cni.go:84] Creating CNI manager for ""
	I0110 02:31:43.178570 2444124 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:43.178597 2444124 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:31:43.178618 2444124 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-405089 NodeName:force-systemd-env-405089 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:31:43.178739 2444124 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-405089"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
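minikube renders the kubeadm config above from a Go template filled with the options struct logged at kubeadm.go:197. A trimmed sketch of that rendering step, using a hypothetical cut-down parameter struct covering only the InitConfiguration fields:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a hypothetical subset of the full options struct.
	type kubeadmParams struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		CRISocket        string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		p := kubeadmParams{ // values from the log above
			AdvertiseAddress: "192.168.85.2",
			APIServerPort:    8443,
			NodeName:         "force-systemd-env-405089",
			CRISocket:        "/var/run/cri-dockerd.sock",
		}
		t := template.Must(template.New("kubeadm").Parse(initCfg))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}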
	
	I0110 02:31:43.178809 2444124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:31:43.186967 2444124 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:31:43.187037 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:31:43.196792 2444124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0110 02:31:43.210260 2444124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:31:43.225215 2444124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 02:31:43.239490 2444124 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:31:43.243821 2444124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:43.256336 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:43.411763 2444124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:31:43.449071 2444124 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089 for IP: 192.168.85.2
	I0110 02:31:43.449090 2444124 certs.go:195] generating shared ca certs ...
	I0110 02:31:43.449107 2444124 certs.go:227] acquiring lock for ca certs: {Name:mk3365aee58ca444945faa08aa6e1c1a1b730f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:43.449242 2444124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key
	I0110 02:31:43.449285 2444124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key
	I0110 02:31:43.449293 2444124 certs.go:257] generating profile certs ...
	I0110 02:31:43.449348 2444124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key
	I0110 02:31:43.449359 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt with IP's: []
	I0110 02:31:44.085771 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt ...
	I0110 02:31:44.085806 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt: {Name:mkef9124ceed79304369528c5a27c7648b78a9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.086085 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key ...
	I0110 02:31:44.086119 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key: {Name:mk76e85724a13af463ddacfcf286ac686d149ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.086302 2444124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b
	I0110 02:31:44.086324 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:31:44.498570 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b ...
	I0110 02:31:44.498600 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b: {Name:mk4617469fd5fea335a0e87bd3a6539b7da9cd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.498789 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b ...
	I0110 02:31:44.498804 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b: {Name:mkb480b1d60b5ebb03b826d7d02dfd7e44510312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.498902 2444124 certs.go:382] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt
	I0110 02:31:44.498990 2444124 certs.go:386] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key
	I0110 02:31:44.499054 2444124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key
	I0110 02:31:44.499073 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt with IP's: []
	I0110 02:31:44.994504 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt ...
	I0110 02:31:44.994541 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt: {Name:mkcf4f6fccba9f412afa8632ad4d0d2e51e05241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.995667 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key ...
	I0110 02:31:44.995695 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key: {Name:mka4f25cc07eefbd88194e70f96e9c6a66c304c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.995867 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:31:44.995917 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:31:44.995937 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:31:44.995956 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:31:44.995969 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:31:44.996009 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:31:44.996029 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:31:44.996041 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:31:44.996117 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem (1338 bytes)
	W0110 02:31:44.996176 2444124 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877_empty.pem, impossibly tiny 0 bytes
	I0110 02:31:44.996191 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 02:31:44.996234 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:31:44.996282 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:31:44.996317 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem (1679 bytes)
	I0110 02:31:44.996397 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:44.996456 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /usr/share/ca-certificates/22228772.pem
	I0110 02:31:44.996487 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:44.996506 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem -> /usr/share/ca-certificates/2222877.pem
	I0110 02:31:44.997128 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:31:45.025285 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:31:45.117344 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:31:45.154410 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:31:45.182384 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:31:45.209059 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:31:45.237573 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:31:45.267287 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:31:45.300586 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /usr/share/ca-certificates/22228772.pem (1708 bytes)
	I0110 02:31:45.325792 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:31:45.348897 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem --> /usr/share/ca-certificates/2222877.pem (1338 bytes)
	I0110 02:31:45.369906 2444124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:31:45.388123 2444124 ssh_runner.go:195] Run: openssl version
	I0110 02:31:45.396122 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.406059 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/22228772.pem /etc/ssl/certs/22228772.pem
	I0110 02:31:45.416032 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.424294 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 02:00 /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.424422 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.470580 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:45.479156 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/22228772.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:45.487771 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.496463 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:31:45.504778 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.510041 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.510161 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.562186 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:31:45.570693 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:31:45.585612 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.595614 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2222877.pem /etc/ssl/certs/2222877.pem
	I0110 02:31:45.604471 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.609923 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 02:00 /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.609994 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.652960 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:31:45.665449 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2222877.pem /etc/ssl/certs/51391683.0
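	Note: the test/ln/hash sequence above, repeated once per CA file, reproduces what c_rehash does: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash (3ec20f2e, b5213941 and 51391683 in this run), and OpenSSL locates trusted CAs by following <hash>.0 symlinks under /etc/ssl/certs. Done by hand for one cert (sketch):
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"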
	I0110 02:31:45.674251 2444124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:31:45.678274 2444124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
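	Note: the failed stat above is a deliberate probe, not an error: a missing apiserver-kubelet-client.crt (exit status 1) is read as "likely first start", so minikube proceeds to kubeadm init rather than reusing cluster state. The same probe by hand (sketch):
	  sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1 || echo "no client cert, treating as first start"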
	I0110 02:31:45.678327 2444124 kubeadm.go:401] StartCluster: {Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:45.678449 2444124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 02:31:45.696897 2444124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:31:45.715111 2444124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:31:45.737562 2444124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:31:45.737624 2444124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:31:45.753944 2444124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:31:45.754029 2444124 kubeadm.go:158] found existing configuration files:
	
	I0110 02:31:45.754124 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:31:45.767074 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:31:45.767192 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:31:45.775076 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:31:45.783839 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:31:45.783955 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:31:45.791557 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:31:45.800017 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:31:45.800155 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:31:45.807549 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:31:45.815839 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:31:45.815967 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:31:45.823557 2444124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:31:45.879151 2444124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:31:45.879584 2444124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:31:45.979502 2444124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:31:45.979676 2444124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:31:45.979749 2444124 kubeadm.go:319] OS: Linux
	I0110 02:31:45.979833 2444124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:31:45.979918 2444124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:31:45.979997 2444124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:31:45.980082 2444124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:31:45.980163 2444124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:31:45.980247 2444124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:31:45.980325 2444124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:31:45.980413 2444124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:31:45.980515 2444124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:31:46.069019 2444124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:31:46.069217 2444124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:31:46.069354 2444124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:31:46.086323 2444124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:31:46.092497 2444124 out.go:252]   - Generating certificates and keys ...
	I0110 02:31:46.092669 2444124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:31:46.092770 2444124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:31:46.875771 2444124 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:31:47.144364 2444124 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:31:47.314724 2444124 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:31:47.984584 2444124 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:31:48.242134 2444124 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:31:48.242499 2444124 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:31:48.461465 2444124 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:31:48.461631 2444124 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:31:48.733504 2444124 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:31:48.861496 2444124 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:31:49.185510 2444124 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:31:49.185598 2444124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:31:49.425584 2444124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:31:49.777471 2444124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:31:49.961468 2444124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:31:50.177454 2444124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:31:50.374241 2444124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:31:50.374970 2444124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:31:50.381331 2444124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:31:50.384846 2444124 out.go:252]   - Booting up control plane ...
	I0110 02:31:50.384957 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:31:50.385056 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:31:50.385129 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:31:50.414088 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:31:50.414228 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:31:50.422787 2444124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:31:50.423116 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:31:50.423172 2444124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:31:50.599415 2444124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:31:50.599570 2444124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:35:50.600577 2444124 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001263977s
	I0110 02:35:50.601197 2444124 kubeadm.go:319] 
	I0110 02:35:50.601279 2444124 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:35:50.601345 2444124 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:35:50.601480 2444124 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:35:50.601496 2444124 kubeadm.go:319] 
	I0110 02:35:50.601596 2444124 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:35:50.601630 2444124 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:35:50.601664 2444124 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:35:50.601672 2444124 kubeadm.go:319] 
	I0110 02:35:50.606506 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:35:50.606929 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:35:50.607043 2444124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:35:50.607291 2444124 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:35:50.607301 2444124 kubeadm.go:319] 
	I0110 02:35:50.607370 2444124 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
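	Note: of the three warnings above, the cgroups v1 one is the most likely lead for the unhealthy kubelet: this host (5.15.0-1084-aws, all CGROUPS_* controllers reported the v1 way) looks like a cgroup v1 machine, and the message states that kubelet v1.35+ needs failCgroupV1 set to false to run there. If that is the cause, the fix would be one extra field in the KubeletConfiguration block rendered earlier (sketch, untested against this job):
	  apiVersion: kubelet.config.k8s.io/v1beta1
	  kind: KubeletConfiguration
	  failCgroupV1: false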
	W0110 02:35:50.607511 2444124 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001263977s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I0110 02:35:50.607594 2444124 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 02:35:51.030219 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:35:51.043577 2444124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:35:51.043642 2444124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:35:51.051651 2444124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:35:51.051673 2444124 kubeadm.go:158] found existing configuration files:
	
	I0110 02:35:51.051734 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:35:51.059812 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:35:51.059882 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:35:51.068320 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:35:51.076706 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:35:51.076822 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:35:51.084858 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:35:51.093615 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:35:51.093686 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:35:51.101862 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:35:51.110328 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:35:51.110395 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:35:51.118285 2444124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:35:51.161915 2444124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:35:51.161979 2444124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:35:51.247247 2444124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:35:51.247324 2444124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:35:51.247366 2444124 kubeadm.go:319] OS: Linux
	I0110 02:35:51.247418 2444124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:35:51.247473 2444124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:35:51.247523 2444124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:35:51.247577 2444124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:35:51.247629 2444124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:35:51.247681 2444124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:35:51.247730 2444124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:35:51.247783 2444124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:35:51.247850 2444124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:35:51.316861 2444124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:35:51.316975 2444124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:35:51.317095 2444124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:35:51.330675 2444124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:35:51.336295 2444124 out.go:252]   - Generating certificates and keys ...
	I0110 02:35:51.336385 2444124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:35:51.336458 2444124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:35:51.336535 2444124 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:35:51.336596 2444124 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:35:51.336666 2444124 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:35:51.336720 2444124 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:35:51.336783 2444124 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:35:51.336844 2444124 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:35:51.336918 2444124 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:35:51.336991 2444124 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:35:51.337028 2444124 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:35:51.337115 2444124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:35:51.445329 2444124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:35:51.773916 2444124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:35:51.845501 2444124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:35:52.201867 2444124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:35:52.810005 2444124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:35:52.810953 2444124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:35:52.813391 2444124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:35:52.816841 2444124 out.go:252]   - Booting up control plane ...
	I0110 02:35:52.816944 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:35:52.817023 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:35:52.828369 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:35:52.849764 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:35:52.849875 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:35:52.858304 2444124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:35:52.858625 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:35:52.858672 2444124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:35:53.019244 2444124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:35:53.019363 2444124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:39:53.016701 2444124 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000239671s
	I0110 02:39:53.016728 2444124 kubeadm.go:319] 
	I0110 02:39:53.016782 2444124 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:39:53.016814 2444124 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:39:53.016913 2444124 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:39:53.016917 2444124 kubeadm.go:319] 
	I0110 02:39:53.017016 2444124 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:39:53.017069 2444124 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:39:53.017100 2444124 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:39:53.017110 2444124 kubeadm.go:319] 
	I0110 02:39:53.026674 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:39:53.027207 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:39:53.027347 2444124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:39:53.027605 2444124 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:39:53.027624 2444124 kubeadm.go:319] 
	I0110 02:39:53.027707 2444124 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:39:53.027777 2444124 kubeadm.go:403] duration metric: took 8m7.349453429s to StartCluster
	I0110 02:39:53.027818 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:39:53.027886 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:39:53.065190 2444124 cri.go:96] found id: ""
	I0110 02:39:53.065233 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.065243 2444124 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:39:53.065251 2444124 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:39:53.065314 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:39:53.090958 2444124 cri.go:96] found id: ""
	I0110 02:39:53.090984 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.090993 2444124 logs.go:284] No container was found matching "etcd"
	I0110 02:39:53.091000 2444124 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:39:53.091077 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:39:53.117931 2444124 cri.go:96] found id: ""
	I0110 02:39:53.117955 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.117964 2444124 logs.go:284] No container was found matching "coredns"
	I0110 02:39:53.117972 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:39:53.118031 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:39:53.143724 2444124 cri.go:96] found id: ""
	I0110 02:39:53.143749 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.143757 2444124 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:39:53.143764 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:39:53.143823 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:39:53.168452 2444124 cri.go:96] found id: ""
	I0110 02:39:53.168477 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.168486 2444124 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:39:53.168492 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:39:53.168550 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:39:53.194925 2444124 cri.go:96] found id: ""
	I0110 02:39:53.194960 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.194969 2444124 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:39:53.194976 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:39:53.195047 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:39:53.220058 2444124 cri.go:96] found id: ""
	I0110 02:39:53.220083 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.220100 2444124 logs.go:284] No container was found matching "kindnet"
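	Note: every crictl query in this sweep returned an empty ID list; across kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet, not one container was ever created. That places the failure before the static-pod stage, in the kubelet itself rather than in any control-plane component. The equivalent one-shot check (sketch):
	  minikube ssh -p force-systemd-env-405089 -- sudo crictl ps -a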
	I0110 02:39:53.220110 2444124 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:39:53.220122 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:39:53.285618 2444124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:53.276636    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.277286    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.278970    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.279530    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.281145    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:39:53.276636    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.277286    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.278970    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.279530    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.281145    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
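	Note: the connection-refused errors are consistent with that diagnosis; with no kube-apiserver container, nothing listens on 8443, so describe-nodes cannot succeed, and the useful evidence is the kubelet journal and its health endpoint, both gathered just below. Checked by hand it would look like (sketch):
	  minikube ssh -p force-systemd-env-405089 -- curl -s http://127.0.0.1:10248/healthz
	  minikube ssh -p force-systemd-env-405089 -- sudo journalctl -u kubelet -n 50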
	I0110 02:39:53.285639 2444124 logs.go:123] Gathering logs for Docker ...
	I0110 02:39:53.285650 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 02:39:53.308836 2444124 logs.go:123] Gathering logs for container status ...
	I0110 02:39:53.308869 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:39:53.341659 2444124 logs.go:123] Gathering logs for kubelet ...
	I0110 02:39:53.341684 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:39:53.401462 2444124 logs.go:123] Gathering logs for dmesg ...
	I0110 02:39:53.401506 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0110 02:39:53.419441 2444124 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:39:53.419490 2444124 out.go:285] * 
	W0110 02:39:53.419567 2444124 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0110 02:39:53.419587 2444124 out.go:285] * 
	W0110 02:39:53.419862 2444124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:39:53.424999 2444124 out.go:203] 
	W0110 02:39:53.428767 2444124 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0110 02:39:53.428834 2444124 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:39:53.428862 2444124 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:39:53.431997 2444124 out.go:203] 

                                                
                                                
** /stderr **
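Diagnostic note: the kubeadm output above reports the kubelet never answering its /healthz check and recommends inspecting the unit directly. A minimal sketch of those checks run from the host, assuming the profile name from this run (the ssh form mirrors commands minikube itself runs over SSH elsewhere in this log):

    # Inspect the kubelet unit inside the minikube node (kubeadm's own suggestion)
    out/minikube-linux-arm64 -p force-systemd-env-405089 ssh "sudo systemctl status kubelet --no-pager"
    out/minikube-linux-arm64 -p force-systemd-env-405089 ssh "sudo journalctl -u kubelet -n 400 --no-pager"
    # Check whether the host is on cgroup v1 or v2; the preflight warning above concerns v1
    stat -fc %T /sys/fs/cgroup/    # "cgroup2fs" => v2, "tmpfs" => v1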
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-405089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
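The exit path is K8S_KUBELET_NOT_RUNNING, and the suggestion printed in the log above is to pass the kubelet cgroup driver explicitly. A sketch of that retry, combining the flags from this run with the suggested extra-config (not verified to fix this particular failure):

    out/minikube-linux-arm64 start -p force-systemd-env-405089 --memory=3072 \
      --alsologtostderr -v=5 --driver=docker --container-runtime=docker \
      --extra-config=kubelet.cgroup-driver=systemd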
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-405089 ssh "docker info --format {{.CgroupDriver}}"
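For context, the check above reads the node's cgroup driver; the same comparison can be made by hand against both daemons (the host's docker info in the Last Start log below reports CgroupDriver:cgroupfs):

    docker info --format '{{.CgroupDriver}}'    # host daemon
    out/minikube-linux-arm64 -p force-systemd-env-405089 ssh "docker info --format {{.CgroupDriver}}"    # daemon inside the node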
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-10 02:39:53.942867606 +0000 UTC m=+2778.150965213
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-405089
helpers_test.go:244: (dbg) docker inspect force-systemd-env-405089:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163",
	        "Created": "2026-01-10T02:31:33.338032745Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2445240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:31:33.470400062Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163/hostname",
	        "HostsPath": "/var/lib/docker/containers/94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163/hosts",
	        "LogPath": "/var/lib/docker/containers/94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163/94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163-json.log",
	        "Name": "/force-systemd-env-405089",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-405089:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-405089",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "94ebbf95e9a1cfb6130d91897628373d05ef3f69585aec8ed0d61fdf4619c163",
	                "LowerDir": "/var/lib/docker/overlay2/b19f36d5b7d542b8932b094e7df45a3126b93fd88ce0412ffa513b65a58da967-init/diff:/var/lib/docker/overlay2/3279adf6388395c7fd34e962c09da15366b225a7b796d4f2275704eeca225de8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b19f36d5b7d542b8932b094e7df45a3126b93fd88ce0412ffa513b65a58da967/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b19f36d5b7d542b8932b094e7df45a3126b93fd88ce0412ffa513b65a58da967/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b19f36d5b7d542b8932b094e7df45a3126b93fd88ce0412ffa513b65a58da967/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-405089",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-405089/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-405089",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-405089",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-405089",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ac77db1f86d833f3704850baf0cac526f52128d9d71ff069a13bf4fc1d59ad66",
	            "SandboxKey": "/var/run/docker/netns/ac77db1f86d8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34981"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34982"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34985"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34983"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34984"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-405089": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:2d:02:30:42:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6a461f3baea083e77135a071b7bbb996a8b44fa921d0b869eaabe4fc6973b2f7",
	                    "EndpointID": "b8b59e5510a694d4f27046cb6b4f80cd861244851981a498150bf2fbe06091e1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-405089",
	                        "94ebbf95e9a1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
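The full inspect dump above can be reduced to the fields relevant to this failure using docker's standard Go-template flag; a small sketch with the container name from this run (field paths taken from the dump itself):

    docker inspect -f 'status={{.State.Status}} cgroupns={{.HostConfig.CgroupnsMode}} privileged={{.HostConfig.Privileged}}' force-systemd-env-405089
    docker inspect -f '{{json .NetworkSettings.Ports}}' force-systemd-env-405089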
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-405089 -n force-systemd-env-405089
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-405089 -n force-systemd-env-405089: exit status 6 (329.160451ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:39:54.270297 2457275 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-405089" does not appear in /home/jenkins/minikube-integration/22414-2221005/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
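The stale-context warning in the status output names its own fix; quoted here with this run's profile (hedged: it repairs kubectl's endpoint, not the failed cluster itself):

    out/minikube-linux-arm64 update-context -p force-systemd-env-405089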
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-405089 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-818554 sudo cat /etc/kubernetes/kubelet.conf                                                                        │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /var/lib/kubelet/config.yaml                                                                        │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status docker --all --full --no-pager                                                         │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat docker --no-pager                                                                         │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /etc/docker/daemon.json                                                                             │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo docker system info                                                                                      │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cri-dockerd --version                                                                                   │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat containerd --no-pager                                                                     │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo cat /etc/containerd/config.toml                                                                         │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo containerd config dump                                                                                  │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo systemctl cat crio --no-pager                                                                           │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ delete  │ -p offline-docker-420658                                                                                                      │ offline-docker-420658     │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │ 10 Jan 26 02:31 UTC │
	│ ssh     │ -p cilium-818554 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ -p cilium-818554 sudo crio config                                                                                             │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ delete  │ -p cilium-818554                                                                                                              │ cilium-818554             │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │ 10 Jan 26 02:31 UTC │
	│ start   │ -p force-systemd-env-405089 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-405089  │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ start   │ -p force-systemd-flag-389625 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-389625 │ jenkins │ v1.37.0 │ 10 Jan 26 02:31 UTC │                     │
	│ ssh     │ force-systemd-env-405089 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-405089  │ jenkins │ v1.37.0 │ 10 Jan 26 02:39 UTC │ 10 Jan 26 02:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:31:31
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
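Given the [IWEF] severity prefix described above, warnings and errors can be pulled out of a saved log with a simple filter; a sketch assuming the log was saved to logs.txt via `minikube logs --file=logs.txt`, the command the help box above suggests:

    grep -E '^[WE][0-9]{4} ' logs.txt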
	I0110 02:31:31.403273 2444942 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:31:31.403569 2444942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:31.403599 2444942 out.go:374] Setting ErrFile to fd 2...
	I0110 02:31:31.403618 2444942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:31:31.403919 2444942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:31:31.404424 2444942 out.go:368] Setting JSON to false
	I0110 02:31:31.405395 2444942 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":36841,"bootTime":1767975451,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 02:31:31.405497 2444942 start.go:143] virtualization:  
	I0110 02:31:31.408819 2444942 out.go:179] * [force-systemd-flag-389625] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:31:31.412885 2444942 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:31:31.412964 2444942 notify.go:221] Checking for updates...
	I0110 02:31:31.425190 2444942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:31:31.428163 2444942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 02:31:31.431030 2444942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 02:31:31.433941 2444942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:31:31.436853 2444942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:31:31.440329 2444942 config.go:182] Loaded profile config "force-systemd-env-405089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:31.440445 2444942 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:31:31.473277 2444942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:31:31.473389 2444942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:31.569510 2444942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2026-01-10 02:31:31.559356986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:31.569623 2444942 docker.go:319] overlay module found
	I0110 02:31:31.577195 2444942 out.go:179] * Using the docker driver based on user configuration
	I0110 02:31:31.580216 2444942 start.go:309] selected driver: docker
	I0110 02:31:31.580239 2444942 start.go:928] validating driver "docker" against <nil>
	I0110 02:31:31.580254 2444942 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:31:31.580972 2444942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:31:31.685470 2444942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2026-01-10 02:31:31.673022095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:31:31.685622 2444942 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:31:31.685842 2444942 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:31:31.695072 2444942 out.go:179] * Using Docker driver with root privileges
	I0110 02:31:31.704472 2444942 cni.go:84] Creating CNI manager for ""
	I0110 02:31:31.704566 2444942 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:31.704582 2444942 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 02:31:31.704671 2444942 start.go:353] cluster config:
	{Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:31.716615 2444942 out.go:179] * Starting "force-systemd-flag-389625" primary control-plane node in "force-systemd-flag-389625" cluster
	I0110 02:31:31.725232 2444942 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 02:31:31.731542 2444942 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:31:31.734740 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:31.734792 2444942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I0110 02:31:31.734803 2444942 cache.go:65] Caching tarball of preloaded images
	I0110 02:31:31.734922 2444942 preload.go:251] Found /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0110 02:31:31.734933 2444942 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I0110 02:31:31.735052 2444942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json ...
	I0110 02:31:31.735070 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json: {Name:mkf231dfddb62b8df14c42136e70d1c72c396e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:31.735223 2444942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:31:31.768290 2444942 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:31:31.768314 2444942 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:31:31.768329 2444942 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:31:31.768360 2444942 start.go:360] acquireMachinesLock for force-systemd-flag-389625: {Name:mkda4641748142b11aadec6867161d872c9610a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:31:31.768468 2444942 start.go:364] duration metric: took 88.236µs to acquireMachinesLock for "force-systemd-flag-389625"
	I0110 02:31:31.768503 2444942 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0110 02:31:31.768575 2444942 start.go:125] createHost starting for "" (driver="docker")
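
The acquireMachinesLock entry above names a machines file lock with spec {Delay:500ms Timeout:10m0s}; the 88.236µs duration metric means the lock was uncontended. A sketch of an acquire loop under those parameters, with tryLock standing in (hypothetically) for the real file-lock attempt:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // acquireWithRetry polls tryLock every delay until timeout, the same
    // shape as the {Delay:500ms Timeout:10m0s} spec in the log.
    func acquireWithRetry(tryLock func() bool, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if tryLock() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        err := acquireWithRetry(func() bool { return true },
            500*time.Millisecond, 10*time.Minute)
        fmt.Println(err) // <nil>: uncontended, first try succeeds
    }
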
	I0110 02:31:29.585409 2444124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:31:29.585634 2444124 start.go:159] libmachine.API.Create for "force-systemd-env-405089" (driver="docker")
	I0110 02:31:29.585669 2444124 client.go:173] LocalClient.Create starting
	I0110 02:31:29.585728 2444124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem
	I0110 02:31:29.585764 2444124 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:29.585784 2444124 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:29.585842 2444124 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem
	I0110 02:31:29.585863 2444124 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:29.585883 2444124 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:29.586231 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:31:29.610121 2444124 cli_runner.go:211] docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:31:29.610191 2444124 network_create.go:284] running [docker network inspect force-systemd-env-405089] to gather additional debugging logs...
	I0110 02:31:29.610221 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089
	W0110 02:31:29.644159 2444124 cli_runner.go:211] docker network inspect force-systemd-env-405089 returned with exit code 1
	I0110 02:31:29.644186 2444124 network_create.go:287] error running [docker network inspect force-systemd-env-405089]: docker network inspect force-systemd-env-405089: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-405089 not found
	I0110 02:31:29.644198 2444124 network_create.go:289] output of [docker network inspect force-systemd-env-405089]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-405089 not found
	
	** /stderr **
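
The warning above is expected rather than a failure: minikube probes `docker network inspect <name>` first, and exit status 1 with "network ... not found" on stderr is the signal that the network still has to be created. A sketch of that probe-then-create shape with os/exec (ensureNetwork and networkExists are hypothetical helpers, with error handling pared down):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // networkExists treats any non-zero exit from `docker network
    // inspect` as "not found", exactly how the log above reads it.
    func networkExists(name string) bool {
        return exec.Command("docker", "network", "inspect", name).Run() == nil
    }

    // ensureNetwork creates the bridge network only when the probe fails.
    func ensureNetwork(name, subnet, gateway string) error {
        if networkExists(name) {
            return nil
        }
        return exec.Command("docker", "network", "create",
            "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
            name).Run()
    }

    func main() {
        fmt.Println(ensureNetwork("force-systemd-env-405089",
            "192.168.85.0/24", "192.168.85.1"))
    }
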
	I0110 02:31:29.644302 2444124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:29.676112 2444124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
	I0110 02:31:29.676635 2444124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0306382db894 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:0a:12:a6:69:af} reservation:<nil>}
	I0110 02:31:29.676947 2444124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42f1ed7cacde IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:5d:25:88:ef:ef} reservation:<nil>}
	I0110 02:31:29.677429 2444124 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d6c9be719dc1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:8d:64:6b:58:be} reservation:<nil>}
	I0110 02:31:29.678964 2444124 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5e090}
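
The four "skipping subnet ... that is taken" lines and the final "using free private subnet" line show the scan: starting at 192.168.49.0/24, the third octet advances by 9 per attempt (49, 58, 67, 76, 85) until a candidate has no existing bridge interface claiming it. A compact sketch of that walk; isTaken is a hypothetical stand-in for the host-interface lookup:

    package main

    import "fmt"

    // firstFreeSubnet reproduces the walk in the log: begin at
    // 192.168.49.0/24 and step the third octet by 9 until a candidate
    // is not claimed by an existing bridge.
    func firstFreeSubnet(isTaken func(cidr string) bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !isTaken(cidr) {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
        }
        // prints 192.168.85.0/24, matching the log
        fmt.Println(firstFreeSubnet(func(c string) bool { return taken[c] }))
    }
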
	I0110 02:31:29.679020 2444124 network_create.go:124] attempt to create docker network force-systemd-env-405089 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:31:29.679130 2444124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-405089 force-systemd-env-405089
	I0110 02:31:29.775823 2444124 network_create.go:108] docker network force-systemd-env-405089 192.168.85.0/24 created
	I0110 02:31:29.775860 2444124 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-405089" container
	I0110 02:31:29.775934 2444124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:31:29.794158 2444124 cli_runner.go:164] Run: docker volume create force-systemd-env-405089 --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:31:29.812548 2444124 oci.go:103] Successfully created a docker volume force-systemd-env-405089
	I0110 02:31:29.812646 2444124 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-405089-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --entrypoint /usr/bin/test -v force-systemd-env-405089:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:31:30.375187 2444124 oci.go:107] Successfully prepared a docker volume force-systemd-env-405089
	I0110 02:31:30.375254 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:30.375264 2444124 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:31:30.375340 2444124 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-405089:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:31:33.218633 2444124 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-405089:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (2.843257443s)
	I0110 02:31:33.218668 2444124 kic.go:203] duration metric: took 2.843399774s to extract preloaded images to volume ...
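
The extraction lines above are the preload trick: the lz4 image tarball is bind-mounted read-only into a throwaway container whose entrypoint is /usr/bin/tar, which unpacks it straight into the named volume the node container will later mount at /var, so dockerd inside the node boots with all Kubernetes images already present. A sketch of the same invocation from Go (extractPreload is a hypothetical wrapper around the exact flags in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload runs a one-shot container that untars the preload
    // into a named volume, mirroring the "docker run --rm --entrypoint
    // /usr/bin/tar" line above.
    func extractPreload(tarball, volume, image string) error {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        ).Run()
    }

    func main() {
        err := extractPreload(
            "/path/to/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4",
            "force-systemd-env-405089",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401")
        fmt.Println(err)
    }
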
	W0110 02:31:33.218794 2444124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:31:33.218913 2444124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:31:33.308593 2444124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-405089 --name force-systemd-env-405089 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-405089 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-405089 --network force-systemd-env-405089 --ip 192.168.85.2 --volume force-systemd-env-405089:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:31:33.809884 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Running}}
	I0110 02:31:33.863227 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:33.899538 2444124 cli_runner.go:164] Run: docker exec force-systemd-env-405089 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:31:33.973139 2444124 oci.go:144] the created container "force-systemd-env-405089" has a running status.
	I0110 02:31:33.973175 2444124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa...
	I0110 02:31:34.190131 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:31:34.190189 2444124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
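
kic.go mints a dedicated keypair per machine and copies the public half, in authorized_keys format, to /home/docker/.ssh/authorized_keys inside the container; that is the 381-byte transfer logged above. A sketch of producing that artifact with crypto/rsa and golang.org/x/crypto/ssh (the 2048-bit key size is an assumption; the log does not state it):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeAuthorizedKey generates an RSA keypair and writes the public
    // half in authorized_keys format, the same artifact the log copies
    // into the container.
    func writeAuthorizedKey(path string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048) // size assumed
        if err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(path, ssh.MarshalAuthorizedKey(pub), 0o600)
    }

    func main() {
        if err := writeAuthorizedKey("id_rsa.pub.authorized"); err != nil {
            panic(err)
        }
    }
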
	I0110 02:31:31.770687 2444942 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:31:31.770958 2444942 start.go:159] libmachine.API.Create for "force-systemd-flag-389625" (driver="docker")
	I0110 02:31:31.770996 2444942 client.go:173] LocalClient.Create starting
	I0110 02:31:31.771061 2444942 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem
	I0110 02:31:31.771107 2444942 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:31.771131 2444942 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:31.771194 2444942 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem
	I0110 02:31:31.771216 2444942 main.go:144] libmachine: Decoding PEM data...
	I0110 02:31:31.771231 2444942 main.go:144] libmachine: Parsing certificate...
	I0110 02:31:31.771599 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:31:31.789231 2444942 cli_runner.go:211] docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:31:31.789311 2444942 network_create.go:284] running [docker network inspect force-systemd-flag-389625] to gather additional debugging logs...
	I0110 02:31:31.789330 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625
	W0110 02:31:31.804491 2444942 cli_runner.go:211] docker network inspect force-systemd-flag-389625 returned with exit code 1
	I0110 02:31:31.804519 2444942 network_create.go:287] error running [docker network inspect force-systemd-flag-389625]: docker network inspect force-systemd-flag-389625: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-389625 not found
	I0110 02:31:31.804531 2444942 network_create.go:289] output of [docker network inspect force-systemd-flag-389625]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-389625 not found
	
	** /stderr **
	I0110 02:31:31.804633 2444942 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:31.821447 2444942 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
	I0110 02:31:31.821788 2444942 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0306382db894 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:9a:0a:12:a6:69:af} reservation:<nil>}
	I0110 02:31:31.822120 2444942 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-42f1ed7cacde IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:5d:25:88:ef:ef} reservation:<nil>}
	I0110 02:31:31.822532 2444942 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001977430}
	I0110 02:31:31.822549 2444942 network_create.go:124] attempt to create docker network force-systemd-flag-389625 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:31:31.822614 2444942 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-389625 force-systemd-flag-389625
	I0110 02:31:31.879729 2444942 network_create.go:108] docker network force-systemd-flag-389625 192.168.76.0/24 created
	I0110 02:31:31.879758 2444942 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-389625" container
	I0110 02:31:31.879830 2444942 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:31:31.907715 2444942 cli_runner.go:164] Run: docker volume create force-systemd-flag-389625 --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:31:31.939677 2444942 oci.go:103] Successfully created a docker volume force-systemd-flag-389625
	I0110 02:31:31.939777 2444942 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-389625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --entrypoint /usr/bin/test -v force-systemd-flag-389625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:31:33.763406 2444942 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-389625-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --entrypoint /usr/bin/test -v force-systemd-flag-389625:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (1.823586252s)
	I0110 02:31:33.763439 2444942 oci.go:107] Successfully prepared a docker volume force-systemd-flag-389625
	I0110 02:31:33.763488 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:33.763505 2444942 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:31:33.763585 2444942 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-389625:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:31:34.225021 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:34.249136 2444124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:31:34.249157 2444124 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-405089 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:31:34.332008 2444124 cli_runner.go:164] Run: docker container inspect force-systemd-env-405089 --format={{.State.Status}}
	I0110 02:31:34.365192 2444124 machine.go:94] provisionDockerMachine start ...
	I0110 02:31:34.365297 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:34.393974 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:34.394308 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:34.394318 2444124 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:31:34.394993 2444124 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33838->127.0.0.1:34981: read: connection reset by peer
	I0110 02:31:37.568807 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-405089
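
The "Error dialing TCP ... connection reset by peer" at 02:31:34 followed by a clean `hostname` result at 02:31:37 is the normal startup pattern: sshd inside the just-created container is not accepting connections yet, and libmachine simply retries. A simplified sketch of that retry at the TCP layer (the real code retries the full SSH handshake; dialWithRetry is hypothetical):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps redialing until sshd inside the new container
    // comes up, matching the reset-then-success sequence in the log.
    func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            time.Sleep(backoff)
        }
        return nil, fmt.Errorf("ssh endpoint never came up: %w", lastErr)
    }

    func main() {
        conn, err := dialWithRetry("127.0.0.1:34981", 10, time.Second)
        if err != nil {
            panic(err)
        }
        conn.Close()
    }
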
	
	I0110 02:31:37.568832 2444124 ubuntu.go:182] provisioning hostname "force-systemd-env-405089"
	I0110 02:31:37.568912 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:37.588249 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.588558 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:37.588583 2444124 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-405089 && echo "force-systemd-env-405089" | sudo tee /etc/hostname
	I0110 02:31:37.746546 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-405089
	
	I0110 02:31:37.746628 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:37.767459 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.767772 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:37.767794 2444124 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-405089' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-405089/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-405089' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:31:37.917803 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: 
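
The empty command output above means the /etc/hosts snippet was a no-op this time (the kic container already carries a hostname entry from docker's --hostname flag): the script only touches the file when the hostname is absent, rewriting an existing 127.0.1.1 line in place or appending one otherwise. Roughly the same idempotent edit as a pure Go string transform (ensureHostsEntry is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the shell logic above: leave the file
    // alone if the hostname is already present, rewrite an existing
    // 127.0.1.1 line, otherwise append one.
    func ensureHostsEntry(hosts, hostname string) string {
        if strings.Contains(hosts, hostname) {
            return hosts
        }
        lines := strings.Split(hosts, "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return strings.Join(lines, "\n")
            }
        }
        return hosts + fmt.Sprintf("127.0.1.1 %s\n", hostname)
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "force-systemd-env-405089"))
    }
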
	I0110 02:31:37.917839 2444124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2221005/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2221005/.minikube}
	I0110 02:31:37.917867 2444124 ubuntu.go:190] setting up certificates
	I0110 02:31:37.917878 2444124 provision.go:84] configureAuth start
	I0110 02:31:37.917939 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:37.936050 2444124 provision.go:143] copyHostCerts
	I0110 02:31:37.936093 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:37.936126 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem, removing ...
	I0110 02:31:37.936143 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:37.936221 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem (1082 bytes)
	I0110 02:31:37.936318 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:37.936341 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem, removing ...
	I0110 02:31:37.936350 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:37.936386 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem (1123 bytes)
	I0110 02:31:37.936442 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:37.936463 2444124 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem, removing ...
	I0110 02:31:37.936471 2444124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:37.936496 2444124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem (1679 bytes)
	I0110 02:31:37.936548 2444124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-405089 san=[127.0.0.1 192.168.85.2 force-systemd-env-405089 localhost minikube]
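
configureAuth generates a server certificate whose SANs cover every name this machine might be dialed by: 127.0.0.1, the static container IP 192.168.85.2, the machine name, localhost, and minikube. A sketch of SAN-bearing certificate generation with crypto/x509; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named in the log:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-405089"}},
            NotBefore:    time.Now(),
            // 26280h matches the CertExpiration value in the config dump
            NotAfter:    time.Now().Add(26280 * time.Hour),
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"force-systemd-env-405089", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
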
	I0110 02:31:38.258206 2444124 provision.go:177] copyRemoteCerts
	I0110 02:31:38.258288 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:31:38.258339 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.276203 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:38.381656 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:31:38.381728 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:31:38.400027 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:31:38.400088 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0110 02:31:38.417556 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:31:38.417620 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:31:38.438553 2444124 provision.go:87] duration metric: took 520.648879ms to configureAuth
	I0110 02:31:38.438640 2444124 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:31:38.438850 2444124 config.go:182] Loaded profile config "force-systemd-env-405089": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:38.438923 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.456723 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.457166 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.457186 2444124 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 02:31:38.623956 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 02:31:38.623983 2444124 ubuntu.go:71] root file system type: overlay
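
`df --output=fstype / | tail -n 1` reporting "overlay" confirms the machine's root is the container's overlay filesystem, which is what ubuntu.go expects for a kic node before it rewrites the docker unit. The same probe from Go (rootFSType is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // rootFSType runs the same probe as the log: `df --output=fstype /`
    // prints a header line then the type, so take the last field.
    func rootFSType() (string, error) {
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            return "", err
        }
        fields := strings.Fields(string(out))
        return fields[len(fields)-1], nil
    }

    func main() {
        t, err := rootFSType()
        if err != nil {
            panic(err)
        }
        fmt.Println(t) // "overlay" inside a kic container
    }
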
	I0110 02:31:38.624112 2444124 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 02:31:38.624190 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.651894 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.652212 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.652296 2444124 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 02:31:38.832340 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 02:31:38.832516 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:38.851001 2444124 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:38.851318 2444124 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34981 <nil> <nil>}
	I0110 02:31:38.851335 2444124 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
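
This one-liner is how minikube keeps the unit update idempotent: the candidate is written to docker.service.new, and only when `diff -u` finds a difference does the script move the file into place and daemon-reload, enable, and restart docker. The same compare-then-swap written out in Go (replaceIfChanged is hypothetical; the systemctl calls stay with the caller):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // replaceIfChanged compares the candidate unit against the live one
    // and only swaps it in when they differ, reporting whether a
    // daemon-reload and restart are needed.
    func replaceIfChanged(path string, candidate []byte) (bool, error) {
        current, _ := os.ReadFile(path) // a missing unit reads as empty
        if bytes.Equal(current, candidate) {
            return false, nil
        }
        if err := os.WriteFile(path+".new", candidate, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(path+".new", path)
    }

    func main() {
        changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            panic(err)
        }
        fmt.Println("restart needed:", changed)
    }
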
	I0110 02:31:36.676943 2444942 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-389625:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (2.913316987s)
	I0110 02:31:36.676976 2444942 kic.go:203] duration metric: took 2.913468033s to extract preloaded images to volume ...
	W0110 02:31:36.677157 2444942 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:31:36.677267 2444942 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:31:36.733133 2444942 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-389625 --name force-systemd-flag-389625 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-389625 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-389625 --network force-systemd-flag-389625 --ip 192.168.76.2 --volume force-systemd-flag-389625:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:31:37.020083 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Running}}
	I0110 02:31:37.049554 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.073410 2444942 cli_runner.go:164] Run: docker exec force-systemd-flag-389625 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:31:37.123872 2444942 oci.go:144] the created container "force-systemd-flag-389625" has a running status.
	I0110 02:31:37.123914 2444942 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa...
	I0110 02:31:37.219546 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:31:37.219643 2444942 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:31:37.246178 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.265663 2444942 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:31:37.265687 2444942 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-389625 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:31:37.315490 2444942 cli_runner.go:164] Run: docker container inspect force-systemd-flag-389625 --format={{.State.Status}}
	I0110 02:31:37.344025 2444942 machine.go:94] provisionDockerMachine start ...
	I0110 02:31:37.344113 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:37.365329 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:37.366213 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:37.366237 2444942 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:31:37.366917 2444942 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:31:40.525424 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-389625
	
	I0110 02:31:40.525452 2444942 ubuntu.go:182] provisioning hostname "force-systemd-flag-389625"
	I0110 02:31:40.525529 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:40.550883 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:40.551514 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:40.551534 2444942 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-389625 && echo "force-systemd-flag-389625" | sudo tee /etc/hostname
	I0110 02:31:40.741599 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-389625
	
	I0110 02:31:40.741787 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:40.769891 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:40.770349 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:40.770376 2444942 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-389625' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-389625/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-389625' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:31:40.933268 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:31:40.933300 2444942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2221005/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2221005/.minikube}
	I0110 02:31:40.933334 2444942 ubuntu.go:190] setting up certificates
	I0110 02:31:40.933344 2444942 provision.go:84] configureAuth start
	I0110 02:31:40.933425 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:40.954041 2444942 provision.go:143] copyHostCerts
	I0110 02:31:40.954074 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:40.954109 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem, removing ...
	I0110 02:31:40.954115 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem
	I0110 02:31:40.954187 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.pem (1082 bytes)
	I0110 02:31:40.954287 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:40.954306 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem, removing ...
	I0110 02:31:40.954311 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem
	I0110 02:31:40.954348 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/cert.pem (1123 bytes)
	I0110 02:31:40.954426 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:40.954443 2444942 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem, removing ...
	I0110 02:31:40.954447 2444942 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem
	I0110 02:31:40.954472 2444942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2221005/.minikube/key.pem (1679 bytes)
	I0110 02:31:40.954527 2444942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-389625 san=[127.0.0.1 192.168.76.2 force-systemd-flag-389625 localhost minikube]
	I0110 02:31:41.170708 2444942 provision.go:177] copyRemoteCerts
	I0110 02:31:41.170784 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:31:41.170832 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.191286 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:41.302379 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:31:41.302491 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 02:31:41.325187 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:31:41.325316 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:31:41.349568 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:31:41.349680 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:31:41.371181 2444942 provision.go:87] duration metric: took 437.80859ms to configureAuth
	I0110 02:31:41.371265 2444942 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:31:41.371507 2444942 config.go:182] Loaded profile config "force-systemd-flag-389625": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:31:41.371603 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.397226 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.397537 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.397547 2444942 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0110 02:31:39.848898 2444124 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 02:31:38.826649162 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0110 02:31:39.848925 2444124 machine.go:97] duration metric: took 5.483705976s to provisionDockerMachine
	I0110 02:31:39.848938 2444124 client.go:176] duration metric: took 10.263257466s to LocalClient.Create
	I0110 02:31:39.848983 2444124 start.go:167] duration metric: took 10.263350347s to libmachine.API.Create "force-systemd-env-405089"
	I0110 02:31:39.848999 2444124 start.go:293] postStartSetup for "force-systemd-env-405089" (driver="docker")
	I0110 02:31:39.849010 2444124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:31:39.849143 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:31:39.849190 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:39.867772 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:39.969324 2444124 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:31:39.972690 2444124 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:31:39.972719 2444124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:31:39.972731 2444124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/addons for local assets ...
	I0110 02:31:39.972810 2444124 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/files for local assets ...
	I0110 02:31:39.972927 2444124 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> 22228772.pem in /etc/ssl/certs
	I0110 02:31:39.972937 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /etc/ssl/certs/22228772.pem
	I0110 02:31:39.973066 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:31:39.981882 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:40.000017 2444124 start.go:296] duration metric: took 151.001946ms for postStartSetup
	I0110 02:31:40.000404 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:40.038533 2444124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/config.json ...
	I0110 02:31:40.038894 2444124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:31:40.038954 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.057310 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.158291 2444124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
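
The two df probes read the health of /var: `df -h ... print $5` yields percent used and `df -BG ... print $4` yields gigabytes available. The same numbers can come straight from statfs(2) without shelling out; a Linux-only sketch (df rounds its figures slightly differently, so treat this as approximate, and varDiskStats is hypothetical):

    package main

    import (
        "fmt"
        "syscall"
    )

    // varDiskStats computes what the two df/awk probes in the log read:
    // percent used and gigabytes available on the filesystem holding /var.
    func varDiskStats(path string) (usedPct int, availGB uint64, err error) {
        var st syscall.Statfs_t
        if err = syscall.Statfs(path, &st); err != nil {
            return
        }
        total := st.Blocks * uint64(st.Bsize)
        free := st.Bfree * uint64(st.Bsize)
        avail := st.Bavail * uint64(st.Bsize)
        usedPct = int(float64(total-free) / float64(total) * 100)
        availGB = avail / (1 << 30)
        return
    }

    func main() {
        pct, gb, err := varDiskStats("/var")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d%% used, %dG available\n", pct, gb)
    }
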
	I0110 02:31:40.163210 2444124 start.go:128] duration metric: took 10.581191023s to createHost
	I0110 02:31:40.163237 2444124 start.go:83] releasing machines lock for "force-systemd-env-405089", held for 10.581321237s
	I0110 02:31:40.163309 2444124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-405089
	I0110 02:31:40.180948 2444124 ssh_runner.go:195] Run: cat /version.json
	I0110 02:31:40.181013 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.181219 2444124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:31:40.181281 2444124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-405089
	I0110 02:31:40.201769 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.209162 2444124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34981 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-env-405089/id_rsa Username:docker}
	I0110 02:31:40.305222 2444124 ssh_runner.go:195] Run: systemctl --version
	I0110 02:31:40.413487 2444124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:31:40.417954 2444124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:31:40.418043 2444124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:31:40.452568 2444124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
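Note: the find at 02:31:40.418 sidelines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; the line above lists what it disabled. The log strips shell quoting, so here is an equivalent, quoting-restored rendering (a sketch using the injection-safe $0 idiom rather than the literal argv):

    # rename matching CNI configs so they are ignored, printing each path as we go
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$0" "$0".mk_disabled' {} \;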
	I0110 02:31:40.452596 2444124 start.go:496] detecting cgroup driver to use...
	I0110 02:31:40.452613 2444124 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:40.452712 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:40.470526 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 02:31:40.482803 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 02:31:40.492346 2444124 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 02:31:40.492457 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 02:31:40.502450 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:40.511527 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 02:31:40.520445 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:40.540761 2444124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:31:40.551850 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 02:31:40.563654 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 02:31:40.574796 2444124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 02:31:40.585488 2444124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:31:40.595355 2444124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:31:40.609803 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:40.744003 2444124 ssh_runner.go:195] Run: sudo systemctl restart containerd
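Note: the sed edits above (02:31:40.470 through 02:31:40.574) rewrite /etc/containerd/config.toml in place before this restart; the file itself is never echoed into the log. On a containerd 1.x CRI config, the stanza the SystemdCgroup edit targets typically looks like this (a representative excerpt, not the file from this run):

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true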
	I0110 02:31:40.876690 2444124 start.go:496] detecting cgroup driver to use...
	I0110 02:31:40.876724 2444124 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:40.876779 2444124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 02:31:40.904144 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:40.918264 2444124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:31:40.953661 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:40.974405 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 02:31:40.989753 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:41.013689 2444124 ssh_runner.go:195] Run: which cri-dockerd
	I0110 02:31:41.018251 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 02:31:41.027476 2444124 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 02:31:41.042305 2444124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 02:31:41.204191 2444124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 02:31:41.332172 2444124 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 02:31:41.332275 2444124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 02:31:41.346373 2444124 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 02:31:41.360708 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:41.514222 2444124 ssh_runner.go:195] Run: sudo systemctl restart docker
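Note: the 129-byte /etc/docker/daemon.json written at 02:31:41.332 is not echoed into the log. A representative payload that forces the systemd cgroup driver would be the following (an assumption based on the 'configuring docker to use "systemd" as cgroup driver' message, not the verbatim file; keys other than exec-opts are illustrative):

    # write a daemon.json that selects the systemd cgroup driver
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    # the daemon-reload/restart sequence the log runs at 02:31:41.360-41.514
    sudo systemctl daemon-reload && sudo systemctl restart docker

native.cgroupdriver is the standard dockerd exec-opt for this; the restart is what makes it take effect.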
	I0110 02:31:42.063171 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:31:42.079374 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 02:31:42.101258 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:42.121381 2444124 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 02:31:42.317076 2444124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 02:31:42.488596 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:42.651589 2444124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 02:31:42.669531 2444124 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 02:31:42.687478 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:42.822336 2444124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 02:31:42.917629 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:42.937980 2444124 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 02:31:42.938103 2444124 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 02:31:42.943727 2444124 start.go:574] Will wait 60s for crictl version
	I0110 02:31:42.943794 2444124 ssh_runner.go:195] Run: which crictl
	I0110 02:31:42.948403 2444124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:31:42.980867 2444124 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I0110 02:31:42.980939 2444124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:43.004114 2444124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:43.042145 2444124 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 02:31:43.042280 2444124 cli_runner.go:164] Run: docker network inspect force-systemd-env-405089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:43.065360 2444124 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:31:43.069214 2444124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:43.081864 2444124 kubeadm.go:884] updating cluster {Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:31:43.081980 2444124 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:43.082036 2444124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:43.100460 2444124 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:43.100486 2444124 docker.go:624] Images already preloaded, skipping extraction
	I0110 02:31:43.100552 2444124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:43.121230 2444124 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:43.121256 2444124 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:31:43.121266 2444124 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I0110 02:31:43.121361 2444124 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-405089 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:31:43.121432 2444124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
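Note: this docker info --format {{.CgroupDriver}} call is the property the force-systemd tests ultimately assert: with the daemon configured as above it should print systemd rather than the cgroupfs default. The same check by hand, from the Jenkins host (hypothetical replay, assuming the container is still up):

    docker exec force-systemd-env-405089 docker info --format '{{.CgroupDriver}}'
    # expected output: systemd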
	I0110 02:31:43.178538 2444124 cni.go:84] Creating CNI manager for ""
	I0110 02:31:43.178570 2444124 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:43.178597 2444124 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:31:43.178618 2444124 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-405089 NodeName:force-systemd-env-405089 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:31:43.178739 2444124 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-405089"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
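Note: the kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check a config like this outside a test run, newer kubeadm releases can validate it without starting anything; a sketch, assuming kubeadm v1.26+ where the validate subcommand exists:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml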
	
	I0110 02:31:43.178809 2444124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:31:43.186967 2444124 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:31:43.187037 2444124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:31:43.196792 2444124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0110 02:31:43.210260 2444124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:31:43.225215 2444124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 02:31:43.239490 2444124 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:31:43.243821 2444124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:43.256336 2444124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:43.411763 2444124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:31:43.449071 2444124 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089 for IP: 192.168.85.2
	I0110 02:31:43.449090 2444124 certs.go:195] generating shared ca certs ...
	I0110 02:31:43.449107 2444124 certs.go:227] acquiring lock for ca certs: {Name:mk3365aee58ca444945faa08aa6e1c1a1b730f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:43.449242 2444124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key
	I0110 02:31:43.449285 2444124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key
	I0110 02:31:43.449293 2444124 certs.go:257] generating profile certs ...
	I0110 02:31:43.449348 2444124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key
	I0110 02:31:43.449359 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt with IP's: []
	I0110 02:31:44.085771 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt ...
	I0110 02:31:44.085806 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.crt: {Name:mkef9124ceed79304369528c5a27c7648b78a9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.086085 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key ...
	I0110 02:31:44.086119 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/client.key: {Name:mk76e85724a13af463ddacfcf286ac686d149ee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.086302 2444124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b
	I0110 02:31:44.086324 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:31:44.498570 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b ...
	I0110 02:31:44.498600 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b: {Name:mk4617469fd5fea335a0e87bd3a6539b7da9cd53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.498789 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b ...
	I0110 02:31:44.498804 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b: {Name:mkb480b1d60b5ebb03b826d7d02dfd7e44510312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.498902 2444124 certs.go:382] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt.6f34228b -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt
	I0110 02:31:44.498990 2444124 certs.go:386] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key.6f34228b -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key
	I0110 02:31:44.499054 2444124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key
	I0110 02:31:44.499073 2444124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt with IP's: []
	I0110 02:31:44.994504 2444124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt ...
	I0110 02:31:44.994541 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt: {Name:mkcf4f6fccba9f412afa8632ad4d0d2e51e05241 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:44.995667 2444124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key ...
	I0110 02:31:44.995695 2444124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key: {Name:mka4f25cc07eefbd88194e70f96e9c6a66c304c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
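Note: the apiserver serving cert generated above embeds the IP SANs listed at 02:31:44.086 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). Standard openssl can confirm what ended up in a cert like this, run against the profile directory on the Jenkins host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt \
      | grep -A1 'Subject Alternative Name'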
	I0110 02:31:44.995867 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:31:44.995917 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:31:44.995937 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:31:44.995956 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:31:44.995969 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:31:44.996009 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:31:44.996029 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:31:44.996041 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:31:44.996117 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem (1338 bytes)
	W0110 02:31:44.996176 2444124 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877_empty.pem, impossibly tiny 0 bytes
	I0110 02:31:44.996191 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 02:31:44.996234 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:31:44.996282 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:31:44.996317 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem (1679 bytes)
	I0110 02:31:44.996397 2444124 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:44.996456 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /usr/share/ca-certificates/22228772.pem
	I0110 02:31:44.996487 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:44.996506 2444124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem -> /usr/share/ca-certificates/2222877.pem
	I0110 02:31:44.997128 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:31:45.025285 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:31:45.117344 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:31:45.154410 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:31:45.182384 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:31:45.209059 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:31:45.237573 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:31:45.267287 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-env-405089/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:31:45.300586 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /usr/share/ca-certificates/22228772.pem (1708 bytes)
	I0110 02:31:45.325792 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:31:45.348897 2444124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem --> /usr/share/ca-certificates/2222877.pem (1338 bytes)
	I0110 02:31:45.369906 2444124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:31:45.388123 2444124 ssh_runner.go:195] Run: openssl version
	I0110 02:31:45.396122 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.406059 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/22228772.pem /etc/ssl/certs/22228772.pem
	I0110 02:31:45.416032 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.424294 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 02:00 /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.424422 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22228772.pem
	I0110 02:31:45.470580 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:45.479156 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/22228772.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:45.487771 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.496463 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:31:45.504778 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.510041 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.510161 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:45.562186 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:31:45.570693 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:31:45.585612 2444124 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.595614 2444124 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2222877.pem /etc/ssl/certs/2222877.pem
	I0110 02:31:45.604471 2444124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.609923 2444124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 02:00 /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.609994 2444124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222877.pem
	I0110 02:31:45.652960 2444124 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:31:45.665449 2444124 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2222877.pem /etc/ssl/certs/51391683.0
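Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed trust directory: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 for the library to find it. Collapsed into one idiom, with values taken from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here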
	I0110 02:31:45.674251 2444124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:31:45.678274 2444124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:31:45.678327 2444124 kubeadm.go:401] StartCluster: {Name:force-systemd-env-405089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-405089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:45.678449 2444124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 02:31:45.696897 2444124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:31:45.715111 2444124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:31:45.737562 2444124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:31:45.737624 2444124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:31:45.753944 2444124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:31:45.754029 2444124 kubeadm.go:158] found existing configuration files:
	
	I0110 02:31:45.754124 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:31:45.767074 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:31:45.767192 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:31:45.775076 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:31:45.783839 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:31:45.783955 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:31:45.791557 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:31:45.800017 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:31:45.800155 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:31:45.807549 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:31:45.815839 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:31:45.815967 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:31:45.823557 2444124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:31:45.879151 2444124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:31:45.879584 2444124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:31:45.979502 2444124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:31:45.979676 2444124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:31:45.979749 2444124 kubeadm.go:319] OS: Linux
	I0110 02:31:45.979833 2444124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:31:45.979918 2444124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:31:45.979997 2444124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:31:45.980082 2444124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:31:45.980163 2444124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:31:45.980247 2444124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:31:45.980325 2444124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:31:45.980413 2444124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:31:45.980515 2444124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:31:46.069019 2444124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:31:46.069217 2444124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:31:46.069354 2444124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:31:46.086323 2444124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
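Note: kubeadm init at 02:31:45.823 runs with a long --ignore-preflight-errors list because the kicbase container cannot satisfy checks such as Swap, Mem or SystemVerification, so the verification output above is printed but not fatal. kubeadm can also run just that stage in isolation, which is handy when reproducing preflight failures; a sketch using the binary path and config file from this run:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init phase preflight \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification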
	I0110 02:31:41.564217 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0110 02:31:41.564316 2444942 ubuntu.go:71] root file system type: overlay
	I0110 02:31:41.564502 2444942 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0110 02:31:41.564636 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.591765 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.592086 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.592175 2444942 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0110 02:31:41.761531 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0110 02:31:41.761616 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:41.782449 2444942 main.go:144] libmachine: Using SSH client type: native
	I0110 02:31:41.782827 2444942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 34986 <nil> <nil>}
	I0110 02:31:41.782851 2444942 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0110 02:31:43.042474 2444942 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2026-01-10 02:31:41.754593192 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
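Note: the diff || { mv; ...; } one-liner at 02:31:41.782 is an idempotent unit update: diff -u exits 0 when the installed docker.service already matches the rendered file, so nothing is touched, and a non-zero exit triggers the replace-and-restart branch; the unified diff above is simply diff's stdout from that non-zero case. The same pattern, spelled out:

    if sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      : # unit unchanged, leave the running daemon alone
    else
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi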
	
	I0110 02:31:43.042496 2444942 machine.go:97] duration metric: took 5.698448584s to provisionDockerMachine
	I0110 02:31:43.042508 2444942 client.go:176] duration metric: took 11.271502022s to LocalClient.Create
	I0110 02:31:43.042522 2444942 start.go:167] duration metric: took 11.271565709s to libmachine.API.Create "force-systemd-flag-389625"
	I0110 02:31:43.042529 2444942 start.go:293] postStartSetup for "force-systemd-flag-389625" (driver="docker")
	I0110 02:31:43.042539 2444942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:31:43.042594 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:31:43.042629 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.076614 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.196482 2444942 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:31:43.201700 2444942 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:31:43.201726 2444942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:31:43.201737 2444942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/addons for local assets ...
	I0110 02:31:43.201796 2444942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2221005/.minikube/files for local assets ...
	I0110 02:31:43.201877 2444942 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> 22228772.pem in /etc/ssl/certs
	I0110 02:31:43.201885 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /etc/ssl/certs/22228772.pem
	I0110 02:31:43.201986 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:31:43.214196 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:43.241904 2444942 start.go:296] duration metric: took 199.360809ms for postStartSetup
	I0110 02:31:43.242273 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:43.263273 2444942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/config.json ...
	I0110 02:31:43.263543 2444942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:31:43.263584 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.283380 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.391153 2444942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:31:43.396781 2444942 start.go:128] duration metric: took 11.628189455s to createHost
	I0110 02:31:43.396804 2444942 start.go:83] releasing machines lock for "force-systemd-flag-389625", held for 11.628322055s
	I0110 02:31:43.396875 2444942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-389625
	I0110 02:31:43.415596 2444942 ssh_runner.go:195] Run: cat /version.json
	I0110 02:31:43.415661 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.415925 2444942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:31:43.415983 2444942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-389625
	I0110 02:31:43.442514 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.477676 2444942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34986 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/force-systemd-flag-389625/id_rsa Username:docker}
	I0110 02:31:43.711077 2444942 ssh_runner.go:195] Run: systemctl --version
	I0110 02:31:43.721326 2444942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:31:43.726734 2444942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:31:43.726807 2444942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:31:43.760612 2444942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:31:43.760636 2444942 start.go:496] detecting cgroup driver to use...
	I0110 02:31:43.760650 2444942 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:43.760747 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:43.776486 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0110 02:31:43.785831 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0110 02:31:43.795047 2444942 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I0110 02:31:43.795106 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0110 02:31:43.804716 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:43.814084 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0110 02:31:43.823155 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0110 02:31:43.832515 2444942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:31:43.841283 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0110 02:31:43.850677 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0110 02:31:43.859949 2444942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0110 02:31:43.869426 2444942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:31:43.878026 2444942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:31:43.886454 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:44.030564 2444942 ssh_runner.go:195] Run: sudo systemctl restart containerd
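Note: the sed edits above switch containerd's runc runtime to the systemd cgroup driver before this restart. A quick way to confirm the result on the node (section name per a stock containerd config.toml; exact layout varies by containerd version):

	sudo grep -n -A 1 'runc.options' /etc/containerd/config.toml
	# expected after the edits above:
	#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	#     SystemdCgroup = true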
	I0110 02:31:44.134281 2444942 start.go:496] detecting cgroup driver to use...
	I0110 02:31:44.134314 2444942 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:31:44.134390 2444942 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0110 02:31:44.164357 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:44.178141 2444942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:31:44.203502 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:31:44.225293 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0110 02:31:44.259875 2444942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:31:44.298197 2444942 ssh_runner.go:195] Run: which cri-dockerd
	I0110 02:31:44.302282 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0110 02:31:44.310035 2444942 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0110 02:31:44.323184 2444942 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0110 02:31:44.479958 2444942 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0110 02:31:44.628745 2444942 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I0110 02:31:44.628855 2444942 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0110 02:31:44.646424 2444942 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0110 02:31:44.659407 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:44.806969 2444942 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0110 02:31:45.429132 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:31:45.449741 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0110 02:31:45.466128 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:45.483936 2444942 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0110 02:31:45.652722 2444942 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0110 02:31:45.851372 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.020791 2444942 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0110 02:31:46.040175 2444942 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0110 02:31:46.054245 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.202922 2444942 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0110 02:31:46.282568 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0110 02:31:46.299250 2444942 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0110 02:31:46.299324 2444942 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0110 02:31:46.304150 2444942 start.go:574] Will wait 60s for crictl version
	I0110 02:31:46.304219 2444942 ssh_runner.go:195] Run: which crictl
	I0110 02:31:46.309882 2444942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:31:46.365333 2444942 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
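Note: the /etc/crictl.yaml written at 02:31:44.259875 (its one-line payload is inline in the log) just points crictl at the cri-dockerd socket, which is why the version probe above reports RuntimeName: docker. Reproduced by hand:

	sudo tee /etc/crictl.yaml <<'EOF'
	runtime-endpoint: unix:///var/run/cri-dockerd.sock
	EOF
	sudo /usr/local/bin/crictl version   # RuntimeName: docker, RuntimeApiVersion: v1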
	I0110 02:31:46.365407 2444942 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:46.397294 2444942 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0110 02:31:46.430776 2444942 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I0110 02:31:46.430856 2444942 cli_runner.go:164] Run: docker network inspect force-systemd-flag-389625 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:31:46.446745 2444942 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:31:46.450899 2444942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:46.460438 2444942 kubeadm.go:884] updating cluster {Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:31:46.460546 2444942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I0110 02:31:46.460598 2444942 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:46.482795 2444942 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:46.482816 2444942 docker.go:624] Images already preloaded, skipping extraction
	I0110 02:31:46.482894 2444942 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0110 02:31:46.503709 2444942 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0110 02:31:46.503732 2444942 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:31:46.503741 2444942 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I0110 02:31:46.503828 2444942 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-389625 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
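Note: the [Unit]/[Service] fragment above is the kubelet drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 324-byte transfer). To see the merged unit actually in effect on the node:

	systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service plus the
	# 10-kubeadm.conf drop-in shown above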
	I0110 02:31:46.503890 2444942 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0110 02:31:46.568277 2444942 cni.go:84] Creating CNI manager for ""
	I0110 02:31:46.568357 2444942 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 02:31:46.568393 2444942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:31:46.568445 2444942 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-389625 NodeName:force-systemd-flag-389625 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:31:46.568620 2444942 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-389625"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
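Note: the three-document config above is written to /var/tmp/minikube/kubeadm.yaml.new below (2225 bytes) and copied into place before init. Outside the test, a generated config like this can be sanity-checked without touching the node via kubeadm's dry-run mode:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run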
	
	I0110 02:31:46.568728 2444942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:31:46.576738 2444942 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:31:46.576804 2444942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:31:46.584333 2444942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0110 02:31:46.597086 2444942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:31:46.609903 2444942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I0110 02:31:46.623198 2444942 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:31:46.627340 2444942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:31:46.637410 2444942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:31:46.813351 2444942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:31:46.853529 2444942 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625 for IP: 192.168.76.2
	I0110 02:31:46.853605 2444942 certs.go:195] generating shared ca certs ...
	I0110 02:31:46.853636 2444942 certs.go:227] acquiring lock for ca certs: {Name:mk3365aee58ca444945faa08aa6e1c1a1b730f09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.853847 2444942 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key
	I0110 02:31:46.853930 2444942 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key
	I0110 02:31:46.853957 2444942 certs.go:257] generating profile certs ...
	I0110 02:31:46.854046 2444942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key
	I0110 02:31:46.854089 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt with IP's: []
	I0110 02:31:46.947349 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt ...
	I0110 02:31:46.947424 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.crt: {Name:mkc2a0e18aeb9bc161a2b7bdc69edce7c225059e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.947656 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key ...
	I0110 02:31:46.947692 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/client.key: {Name:mkbec37be7fe98f01eeac1efcff3341ee3c0872e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:46.947838 2444942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11
	I0110 02:31:46.947881 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:31:47.211172 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 ...
	I0110 02:31:47.211243 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11: {Name:mkb26b4fa8a855d6ab75cf6ae5986179421e433d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.211463 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11 ...
	I0110 02:31:47.211500 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11: {Name:mkaede7629652a36b550448eb511dc667db770a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.211648 2444942 certs.go:382] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt.754ddc11 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt
	I0110 02:31:47.211795 2444942 certs.go:386] copying /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key.754ddc11 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key
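Note: the apiserver cert assembled above is signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; 10.96.0.1 is the in-cluster kubernetes service VIP from ServiceCIDR 10.96.0.0/12 and 192.168.76.2 is the node IP. To inspect the SANs that actually landed in the cert:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt \
	  | grep -A 1 'Subject Alternative Name'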
	I0110 02:31:47.211904 2444942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key
	I0110 02:31:47.211947 2444942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt with IP's: []
	I0110 02:31:47.431675 2444942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt ...
	I0110 02:31:47.431751 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt: {Name:mkf0c56bc6a962d35ef411e8b1db0da0dee06e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.431961 2444942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key ...
	I0110 02:31:47.431997 2444942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key: {Name:mk1b1a2249d88d087b490ca8bc1af9bab6c5cd65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:31:47.432136 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:31:47.432180 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:31:47.432212 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:31:47.432258 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:31:47.432293 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:31:47.432322 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:31:47.432364 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:31:47.432398 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:31:47.432482 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem (1338 bytes)
	W0110 02:31:47.432539 2444942 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877_empty.pem, impossibly tiny 0 bytes
	I0110 02:31:47.432564 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 02:31:47.432623 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:31:47.432673 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:31:47.432730 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/key.pem (1679 bytes)
	I0110 02:31:47.432801 2444942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem (1708 bytes)
	I0110 02:31:47.432861 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.432896 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem -> /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.432926 2444942 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem -> /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.433610 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:31:47.453555 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:31:47.472772 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:31:47.493487 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0110 02:31:47.513383 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:31:47.534626 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:31:47.554446 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:31:47.574178 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/force-systemd-flag-389625/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:31:47.594420 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:31:47.614798 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/certs/2222877.pem --> /usr/share/ca-certificates/2222877.pem (1338 bytes)
	I0110 02:31:47.635266 2444942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/ssl/certs/22228772.pem --> /usr/share/ca-certificates/22228772.pem (1708 bytes)
	I0110 02:31:47.655406 2444942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:31:47.670021 2444942 ssh_runner.go:195] Run: openssl version
	I0110 02:31:47.676614 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.684815 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:31:47.693216 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.697583 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.697646 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:31:47.771210 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:31:47.792458 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:31:47.806445 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.828400 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/2222877.pem /etc/ssl/certs/2222877.pem
	I0110 02:31:47.841461 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.847202 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 02:00 /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.847317 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222877.pem
	I0110 02:31:47.889947 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:31:47.898442 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/2222877.pem /etc/ssl/certs/51391683.0
	I0110 02:31:47.910391 2444942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.918871 2444942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/22228772.pem /etc/ssl/certs/22228772.pem
	I0110 02:31:47.928363 2444942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.932866 2444942 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 02:00 /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.932981 2444942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22228772.pem
	I0110 02:31:47.975611 2444942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:31:47.984122 2444942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/22228772.pem /etc/ssl/certs/3ec20f2e.0
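Note: the test/ln sequence above implements OpenSSL's hashed-directory convention: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so verification can locate it. The hash is exactly what the openssl x509 -hash calls above print, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink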
	I0110 02:31:47.992727 2444942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:31:47.997508 2444942 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:31:47.997608 2444942 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-389625 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-389625 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:31:47.997780 2444942 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0110 02:31:48.015607 2444942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:31:48.027609 2444942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:31:48.037195 2444942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:31:48.037364 2444942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:31:48.049830 2444942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:31:48.049901 2444942 kubeadm.go:158] found existing configuration files:
	
	I0110 02:31:48.049986 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:31:48.059872 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:31:48.059993 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:31:48.068889 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:31:48.079048 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:31:48.079166 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:31:48.088092 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:31:48.098007 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:31:48.098121 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:31:48.107267 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:31:48.117920 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:31:48.118032 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:31:48.127917 2444942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:31:48.180767 2444942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:31:48.180909 2444942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:31:48.290339 2444942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:31:48.290624 2444942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:31:48.290676 2444942 kubeadm.go:319] OS: Linux
	I0110 02:31:48.290728 2444942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:31:48.290780 2444942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:31:48.290831 2444942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:31:48.290894 2444942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:31:48.290946 2444942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:31:48.291013 2444942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:31:48.291064 2444942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:31:48.291119 2444942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:31:48.291170 2444942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:31:48.376921 2444942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:31:48.377171 2444942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:31:48.377352 2444942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:31:48.409493 2444942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:31:46.092497 2444124 out.go:252]   - Generating certificates and keys ...
	I0110 02:31:46.092669 2444124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:31:46.092770 2444124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:31:46.875771 2444124 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:31:47.144364 2444124 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:31:47.314724 2444124 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:31:47.984584 2444124 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:31:48.242134 2444124 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:31:48.242499 2444124 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:31:48.461465 2444124 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:31:48.461631 2444124 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:31:48.733504 2444124 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:31:48.861496 2444124 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:31:49.185510 2444124 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:31:49.185598 2444124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:31:49.425584 2444124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:31:49.777471 2444124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:31:49.961468 2444124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:31:50.177454 2444124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:31:50.374241 2444124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:31:50.374970 2444124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:31:50.381331 2444124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:31:48.416465 2444942 out.go:252]   - Generating certificates and keys ...
	I0110 02:31:48.416688 2444942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:31:48.416848 2444942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:31:48.613948 2444942 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:31:49.073506 2444942 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:31:49.428686 2444942 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:31:49.712507 2444942 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:31:49.836655 2444942 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:31:49.837353 2444942 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:31:50.119233 2444942 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:31:50.120016 2444942 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:31:50.479427 2444942 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:31:50.633494 2444942 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:31:50.705818 2444942 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:31:50.706064 2444942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:31:50.768089 2444942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:31:50.918537 2444942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:31:51.105411 2444942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:31:51.794074 2444942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:31:52.020214 2444942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:31:52.020319 2444942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:31:52.025960 2444942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:31:50.384846 2444124 out.go:252]   - Booting up control plane ...
	I0110 02:31:50.384957 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:31:50.385056 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:31:50.385129 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:31:50.414088 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:31:50.414228 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:31:50.422787 2444124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:31:50.423116 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:31:50.423172 2444124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:31:50.599415 2444124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:31:50.599570 2444124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:31:52.029579 2444942 out.go:252]   - Booting up control plane ...
	I0110 02:31:52.029696 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:31:52.030816 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:31:52.032102 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:31:52.049145 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:31:52.049263 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:31:52.057814 2444942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:31:52.058122 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:31:52.058167 2444942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:31:52.196343 2444942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:31:52.196468 2444942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:35:50.600577 2444124 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001263977s
	I0110 02:35:50.601197 2444124 kubeadm.go:319] 
	I0110 02:35:50.601279 2444124 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:35:50.601345 2444124 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:35:50.601480 2444124 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:35:50.601496 2444124 kubeadm.go:319] 
	I0110 02:35:50.601596 2444124 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:35:50.601630 2444124 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:35:50.601664 2444124 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:35:50.601672 2444124 kubeadm.go:319] 
	I0110 02:35:50.606506 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:35:50.606929 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:35:50.607043 2444124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:35:50.607291 2444124 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:35:50.607301 2444124 kubeadm.go:319] 
	I0110 02:35:50.607370 2444124 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 02:35:50.607511 2444124 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-405089 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001263977s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
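Note: the run dies at the kubelet health gate: kubeadm writes the static-pod manifests and starts the kubelet, but http://127.0.0.1:10248/healthz never answers within 4m0s, which points at the kubelet itself rather than any single control-plane pod. The triage kubeadm suggests, run inside the node (e.g. via minikube ssh -p force-systemd-env-405089 for this profile), plus a check prompted by the cgroups v1 warning above:

	curl -sSL http://127.0.0.1:10248/healthz          # the probe that timed out
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 50
	# per the deprecation warning: kubelet v1.35+ on a cgroup v1 host needs
	# FailCgroupV1 explicitly set to false in its configuration
	grep -i failcgroupv1 /var/lib/kubelet/config.yaml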
	
	I0110 02:35:50.607594 2444124 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 02:35:51.030219 2444124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:35:51.043577 2444124 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:35:51.043642 2444124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:35:51.051651 2444124 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:35:51.051673 2444124 kubeadm.go:158] found existing configuration files:
	
	I0110 02:35:51.051734 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:35:51.059812 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:35:51.059882 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:35:51.068320 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:35:51.076706 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:35:51.076822 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:35:51.084858 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:35:51.093615 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:35:51.093686 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:35:51.101862 2444124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:35:51.110328 2444124 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:35:51.110395 2444124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:35:51.118285 2444124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:35:51.161915 2444124 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:35:51.161979 2444124 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:35:51.247247 2444124 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:35:51.247324 2444124 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:35:51.247366 2444124 kubeadm.go:319] OS: Linux
	I0110 02:35:51.247418 2444124 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:35:51.247473 2444124 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:35:51.247523 2444124 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:35:51.247577 2444124 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:35:51.247629 2444124 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:35:51.247681 2444124 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:35:51.247730 2444124 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:35:51.247783 2444124 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:35:51.247850 2444124 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:35:51.316861 2444124 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:35:51.316975 2444124 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:35:51.317095 2444124 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:35:51.330675 2444124 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:35:51.336295 2444124 out.go:252]   - Generating certificates and keys ...
	I0110 02:35:51.336385 2444124 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:35:51.336458 2444124 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:35:51.336535 2444124 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:35:51.336596 2444124 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:35:51.336666 2444124 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:35:51.336720 2444124 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:35:51.336783 2444124 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:35:51.336844 2444124 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:35:51.336918 2444124 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:35:51.336991 2444124 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:35:51.337028 2444124 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:35:51.337115 2444124 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:35:51.445329 2444124 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:35:51.773916 2444124 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:35:51.845501 2444124 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:35:52.201867 2444124 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:35:52.810005 2444124 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:35:52.810953 2444124 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:35:52.813391 2444124 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:35:52.196251 2444942 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000438294s
	I0110 02:35:52.196284 2444942 kubeadm.go:319] 
	I0110 02:35:52.196342 2444942 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:35:52.196375 2444942 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:35:52.196480 2444942 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:35:52.196486 2444942 kubeadm.go:319] 
	I0110 02:35:52.196591 2444942 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:35:52.196622 2444942 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:35:52.196653 2444942 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:35:52.196658 2444942 kubeadm.go:319] 
	I0110 02:35:52.202848 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:35:52.203270 2444942 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:35:52.203377 2444942 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:35:52.203640 2444942 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 02:35:52.203646 2444942 kubeadm.go:319] 
	I0110 02:35:52.203714 2444942 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 02:35:52.203844 2444942 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-389625 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000438294s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
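	[Note: kubeadm's error text above names the manual checks to run. A sketch combining them with the healthz probe quoted in the error message; the final cgroup-version check is an addition not taken from the log, included because the warnings point at cgroup v1 as the likely cause:

	    # Inspect the kubelet unit and recent logs, as suggested by kubeadm.
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    # Probe the health endpoint the wait-control-plane phase polls.
	    curl -sSL http://127.0.0.1:10248/healthz
	    # Not from the log: report the host cgroup filesystem (cgroup2fs => v2, tmpfs => v1).
	    stat -fc %T /sys/fs/cgroup
	]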
	I0110 02:35:52.203917 2444942 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0110 02:35:52.668064 2444942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:35:52.684406 2444942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:35:52.684471 2444942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:35:52.694960 2444942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:35:52.695030 2444942 kubeadm.go:158] found existing configuration files:
	
	I0110 02:35:52.695114 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:35:52.703880 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:35:52.703940 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:35:52.712165 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:35:52.721863 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:35:52.721985 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:35:52.731171 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:35:52.740287 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:35:52.740404 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:35:52.748618 2444942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:35:52.757969 2444942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:35:52.758029 2444942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:35:52.766204 2444942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:35:52.819064 2444942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:35:52.819481 2444942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:35:52.927559 2444942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:35:52.927642 2444942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:35:52.927679 2444942 kubeadm.go:319] OS: Linux
	I0110 02:35:52.927725 2444942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:35:52.927773 2444942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:35:52.927829 2444942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:35:52.927879 2444942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:35:52.927933 2444942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:35:52.927982 2444942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:35:52.928027 2444942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:35:52.928076 2444942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:35:52.928122 2444942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:35:53.012278 2444942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:35:53.012391 2444942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:35:53.012483 2444942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:35:53.037432 2444942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:35:52.816841 2444124 out.go:252]   - Booting up control plane ...
	I0110 02:35:52.816944 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:35:52.817023 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:35:52.828369 2444124 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:35:52.849764 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:35:52.849875 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:35:52.858304 2444124 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:35:52.858625 2444124 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:35:52.858672 2444124 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:35:53.019244 2444124 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:35:53.019363 2444124 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:35:53.040921 2444942 out.go:252]   - Generating certificates and keys ...
	I0110 02:35:53.041059 2444942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:35:53.041136 2444942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:35:53.041218 2444942 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:35:53.041284 2444942 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:35:53.041359 2444942 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:35:53.041417 2444942 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:35:53.041484 2444942 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:35:53.041550 2444942 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:35:53.041630 2444942 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:35:53.041707 2444942 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:35:53.041749 2444942 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:35:53.041814 2444942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:35:53.331718 2444942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:35:53.451638 2444942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:35:53.804134 2444942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:35:54.036793 2444942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:35:54.605846 2444942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:35:54.606454 2444942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:35:54.608995 2444942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:35:54.612162 2444942 out.go:252]   - Booting up control plane ...
	I0110 02:35:54.612265 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:35:54.612343 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:35:54.612409 2444942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:35:54.632870 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:35:54.633407 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:35:54.640913 2444942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:35:54.641255 2444942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:35:54.641302 2444942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:35:54.777508 2444942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:35:54.777628 2444942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:39:53.016701 2444124 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000239671s
	I0110 02:39:53.016728 2444124 kubeadm.go:319] 
	I0110 02:39:53.016782 2444124 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:39:53.016814 2444124 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:39:53.016913 2444124 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:39:53.016917 2444124 kubeadm.go:319] 
	I0110 02:39:53.017016 2444124 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:39:53.017069 2444124 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:39:53.017100 2444124 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:39:53.017110 2444124 kubeadm.go:319] 
	I0110 02:39:53.026674 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:39:53.027207 2444124 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:39:53.027347 2444124 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:39:53.027605 2444124 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:39:53.027624 2444124 kubeadm.go:319] 
	I0110 02:39:53.027707 2444124 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:39:53.027777 2444124 kubeadm.go:403] duration metric: took 8m7.349453429s to StartCluster
	I0110 02:39:53.027818 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:39:53.027886 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:39:53.065190 2444124 cri.go:96] found id: ""
	I0110 02:39:53.065233 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.065243 2444124 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:39:53.065251 2444124 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:39:53.065314 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:39:53.090958 2444124 cri.go:96] found id: ""
	I0110 02:39:53.090984 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.090993 2444124 logs.go:284] No container was found matching "etcd"
	I0110 02:39:53.091000 2444124 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:39:53.091077 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:39:53.117931 2444124 cri.go:96] found id: ""
	I0110 02:39:53.117955 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.117964 2444124 logs.go:284] No container was found matching "coredns"
	I0110 02:39:53.117972 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:39:53.118031 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:39:53.143724 2444124 cri.go:96] found id: ""
	I0110 02:39:53.143749 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.143757 2444124 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:39:53.143764 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:39:53.143823 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:39:53.168452 2444124 cri.go:96] found id: ""
	I0110 02:39:53.168477 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.168486 2444124 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:39:53.168492 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:39:53.168550 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:39:53.194925 2444124 cri.go:96] found id: ""
	I0110 02:39:53.194960 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.194969 2444124 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:39:53.194976 2444124 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:39:53.195047 2444124 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:39:53.220058 2444124 cri.go:96] found id: ""
	I0110 02:39:53.220083 2444124 logs.go:282] 0 containers: []
	W0110 02:39:53.220100 2444124 logs.go:284] No container was found matching "kindnet"
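	[Note: minikube probes each control-plane component the same way — crictl ps filtered by container name, with empty output meaning the component never started. The equivalent loop as a sketch, over the component names checked above:

	    # List any container (running or exited) per component; empty output => never started.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	        printf '%s: ' "$name"
	        sudo crictl --timeout=10s ps -a --quiet --name="$name"
	        echo
	    done
	]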
	I0110 02:39:53.220110 2444124 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:39:53.220122 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:39:53.285618 2444124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:53.276636    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.277286    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.278970    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.279530    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.281145    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:39:53.276636    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.277286    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.278970    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.279530    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:53.281145    5574 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:39:53.285639 2444124 logs.go:123] Gathering logs for Docker ...
	I0110 02:39:53.285650 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0110 02:39:53.308836 2444124 logs.go:123] Gathering logs for container status ...
	I0110 02:39:53.308869 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:39:53.341659 2444124 logs.go:123] Gathering logs for kubelet ...
	I0110 02:39:53.341684 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:39:53.401462 2444124 logs.go:123] Gathering logs for dmesg ...
	I0110 02:39:53.401506 2444124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0110 02:39:53.419441 2444124 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:39:53.419490 2444124 out.go:285] * 
	W0110 02:39:53.419567 2444124 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:53.419587 2444124 out.go:285] * 
	W0110 02:39:53.419862 2444124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:39:53.424999 2444124 out.go:203] 
	W0110 02:39:53.428767 2444124 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000239671s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:39:53.428834 2444124 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:39:53.428862 2444124 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:39:53.431997 2444124 out.go:203] 
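	[Note: the suggestion printed above can be applied on a retry. A hedged example using the profile name from this run; flags other than --extra-config are assumed from context (docker driver and docker runtime are both evident in the log), and whether this resolves the cgroup v1 validation failure depends on the host — see the linked issue #4172:

	    # Retry the failing start with the cgroup-driver hint from the suggestion.
	    out/minikube-linux-arm64 start -p force-systemd-flag-389625 \
	        --driver=docker --container-runtime=docker \
	        --extra-config=kubelet.cgroup-driver=systemd
	]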
	
	
	==> Docker <==
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.728611871Z" level=info msg="Restoring containers: start."
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.737599966Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.753456450Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.971554337Z" level=info msg="Loading containers: done."
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.989665514Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.989717312Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Jan 10 02:31:41 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:41.989753701Z" level=info msg="Initializing buildkit"
	Jan 10 02:31:42 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:42.047228484Z" level=info msg="Completed buildkit initialization"
	Jan 10 02:31:42 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:42.059790049Z" level=info msg="Daemon has completed initialization"
	Jan 10 02:31:42 force-systemd-env-405089 systemd[1]: Started docker.service - Docker Application Container Engine.
	Jan 10 02:31:42 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:42.063091532Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 10 02:31:42 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:42.063444395Z" level=info msg="API listen on /run/docker.sock"
	Jan 10 02:31:42 force-systemd-env-405089 dockerd[1145]: time="2026-01-10T02:31:42.063466811Z" level=info msg="API listen on [::]:2376"
	Jan 10 02:31:42 force-systemd-env-405089 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Start docker client with request timeout 0s"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Loaded network plugin cni"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Setting cgroupDriver systemd"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jan 10 02:31:42 force-systemd-env-405089 cri-dockerd[1429]: time="2026-01-10T02:31:42Z" level=info msg="Start cri-dockerd grpc backend"
	Jan 10 02:31:42 force-systemd-env-405089 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:39:54.920095    5720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:54.921281    5720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:54.922925    5720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:54.923394    5720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:39:54.924976    5720 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 01:53] kauditd_printk_skb: 8 callbacks suppressed
	[Jan10 02:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 02:39:54 up 10:22,  0 user,  load average: 0.04, 0.83, 1.77
	Linux force-systemd-env-405089 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 02:39:51 force-systemd-env-405089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:52 force-systemd-env-405089 kubelet[5498]: E0110 02:39:52.263645    5498 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:52 force-systemd-env-405089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:53 force-systemd-env-405089 kubelet[5504]: E0110 02:39:53.019433    5504 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:53 force-systemd-env-405089 kubelet[5598]: E0110 02:39:53.778633    5598 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:53 force-systemd-env-405089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:39:54 force-systemd-env-405089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Jan 10 02:39:54 force-systemd-env-405089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:54 force-systemd-env-405089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:39:54 force-systemd-env-405089 kubelet[5643]: E0110 02:39:54.541900    5643 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:39:54 force-systemd-env-405089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:39:54 force-systemd-env-405089 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
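[Note: the kubelet section of the dump shows the actual crash loop — kubelet exits during config validation because the host runs cgroup v1, and the kubeadm warnings earlier say kubelet v1.35 requires the configuration option 'FailCgroupV1' set to 'false' to opt in. A hypothetical sketch of that opt-in against the config path minikube writes in this log; the YAML field name failCgroupV1 is an assumption inferred from the option named in the warning, not confirmed by the log:

    # Opt the kubelet into cgroup v1 (assumed field name: failCgroupV1), then restart.
    # /var/lib/kubelet/config.yaml is the path written during kubelet-start above.
    sudo grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml || \
        echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet
]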
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-405089 -n force-systemd-env-405089
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-405089 -n force-systemd-env-405089: exit status 6 (507.045289ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:39:55.839982 2457583 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-405089" does not appear in /home/jenkins/minikube-integration/22414-2221005/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-405089" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-405089" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-405089
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-405089: (2.04114737s)
--- FAIL: TestForceSystemdEnv (508.79s)
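The failure above is not flaky infrastructure: the journal shows the kubelet exiting with "kubelet is configured to not run on a host using cgroup v1" on every one of its 320+ systemd restarts, so the service can never come up. A minimal standalone Go sketch (not part of the minikube test suite; the probe path is the standard cgroup2 mount point) for checking which hierarchy a Linux host exposes:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a host booted with the unified cgroup v2 hierarchy, the root of
		// /sys/fs/cgroup exposes a cgroup.controllers file; on legacy or
		// hybrid cgroup v1 setups (the situation the kubelet rejects in the
		// journal above) that file does not exist.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}

Running a check like this inside the node container (for example via docker exec) would confirm which hierarchy the kic base image sees on this arm64 host.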

                                                
                                    

Test pass (324/352)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 8.62
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 4.29
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.1
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
22 TestOffline 77.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 137.85
29 TestAddons/serial/Volcano 43.57
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/FakeCredentials 9.96
35 TestAddons/parallel/Registry 16.67
36 TestAddons/parallel/RegistryCreds 0.89
37 TestAddons/parallel/Ingress 17.43
38 TestAddons/parallel/InspektorGadget 11.85
39 TestAddons/parallel/MetricsServer 7.02
41 TestAddons/parallel/CSI 43.1
42 TestAddons/parallel/Headlamp 18.03
43 TestAddons/parallel/CloudSpanner 6.58
44 TestAddons/parallel/LocalPath 53.15
45 TestAddons/parallel/NvidiaDevicePlugin 5.5
46 TestAddons/parallel/Yakd 11.92
48 TestAddons/StoppedEnableDisable 11.44
49 TestCertOptions 34.1
50 TestCertExpiration 247.56
51 TestDockerFlags 38.25
58 TestErrorSpam/setup 29.15
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.17
61 TestErrorSpam/pause 1.54
62 TestErrorSpam/unpause 1.61
63 TestErrorSpam/stop 11.31
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 66.71
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.83
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.73
75 TestFunctional/serial/CacheCmd/cache/add_local 1
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 41.27
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.23
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.52
89 TestFunctional/parallel/ConfigCmd 0.51
90 TestFunctional/parallel/DashboardCmd 15.21
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.09
97 TestFunctional/parallel/ServiceCmdConnect 7.6
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 21.17
101 TestFunctional/parallel/SSHCmd 0.74
102 TestFunctional/parallel/CpCmd 2.52
104 TestFunctional/parallel/FileSync 0.41
105 TestFunctional/parallel/CertSync 2.31
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.33
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 8.37
130 TestFunctional/parallel/ServiceCmd/List 0.56
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.72
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
133 TestFunctional/parallel/ServiceCmd/Format 0.44
134 TestFunctional/parallel/ServiceCmd/URL 0.39
135 TestFunctional/parallel/MountCmd/specific-port 2.24
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.72
137 TestFunctional/parallel/Version/short 0.12
138 TestFunctional/parallel/Version/components 1.26
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.94
144 TestFunctional/parallel/ImageCommands/Setup 0.6
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.96
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.83
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.08
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
155 TestFunctional/parallel/DockerEnv/bash 1.19
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 148.2
164 TestMultiControlPlane/serial/DeployApp 8.85
165 TestMultiControlPlane/serial/PingHostFromPods 1.85
166 TestMultiControlPlane/serial/AddWorkerNode 36.1
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.21
169 TestMultiControlPlane/serial/CopyFile 20.9
170 TestMultiControlPlane/serial/StopSecondaryNode 12.12
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
172 TestMultiControlPlane/serial/RestartSecondaryNode 44.73
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.07
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 163.46
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.91
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.87
177 TestMultiControlPlane/serial/StopCluster 33.69
178 TestMultiControlPlane/serial/RestartCluster 67.7
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.86
180 TestMultiControlPlane/serial/AddSecondaryNode 61.14
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.14
184 TestImageBuild/serial/Setup 28.56
185 TestImageBuild/serial/NormalBuild 1.96
186 TestImageBuild/serial/BuildWithBuildArg 1.1
187 TestImageBuild/serial/BuildWithDockerIgnore 1.26
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.72
193 TestJSONOutput/start/Command 67.48
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.66
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.59
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 11.2
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.23
218 TestKicCustomNetwork/create_custom_network 30.22
219 TestKicCustomNetwork/use_default_bridge_network 31.13
220 TestKicExistingNetwork 30.31
221 TestKicCustomSubnet 31.02
222 TestKicStaticIP 31.98
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 62.08
227 TestMountStart/serial/StartWithMountFirst 9.97
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 7.74
230 TestMountStart/serial/VerifyMountSecond 0.27
231 TestMountStart/serial/DeleteFirst 1.59
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.3
234 TestMountStart/serial/RestartStopped 8.86
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 85.55
239 TestMultiNode/serial/DeployApp2Nodes 6.13
240 TestMultiNode/serial/PingHostFrom2Pods 1.02
241 TestMultiNode/serial/AddNode 32.46
242 TestMultiNode/serial/MultiNodeLabels 0.1
243 TestMultiNode/serial/ProfileList 0.73
244 TestMultiNode/serial/CopyFile 10.75
245 TestMultiNode/serial/StopNode 2.49
246 TestMultiNode/serial/StartAfterStop 9.9
247 TestMultiNode/serial/RestartKeepsNodes 74.49
248 TestMultiNode/serial/DeleteNode 5.85
249 TestMultiNode/serial/StopMultiNode 22.16
250 TestMultiNode/serial/RestartMultiNode 52.05
251 TestMultiNode/serial/ValidateNameConflict 32.08
258 TestScheduledStopUnix 102.71
259 TestSkaffold 137.68
261 TestInsufficientStorage 13.72
262 TestRunningBinaryUpgrade 331.1
264 TestKubernetesUpgrade 344.04
265 TestMissingContainerUpgrade 87.11
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
268 TestNoKubernetes/serial/StartWithK8s 37.25
269 TestNoKubernetes/serial/StartWithStopK8s 13.81
270 TestNoKubernetes/serial/Start 6.61
271 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
273 TestNoKubernetes/serial/ProfileList 1.12
274 TestNoKubernetes/serial/Stop 1.32
275 TestNoKubernetes/serial/StartNoArgs 8.47
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
288 TestStoppedBinaryUpgrade/Setup 1.01
289 TestStoppedBinaryUpgrade/Upgrade 342.11
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
298 TestPreload/Start-NoPreload-PullImage 85.45
300 TestPause/serial/Start 78.87
301 TestPreload/Restart-With-Preload-Check-User-Image 56.85
303 TestNetworkPlugins/group/auto/Start 76.8
304 TestPause/serial/SecondStartNoReconfiguration 45.13
305 TestPause/serial/Pause 0.69
306 TestPause/serial/VerifyStatus 0.37
307 TestPause/serial/Unpause 0.6
308 TestPause/serial/PauseAgain 0.94
309 TestPause/serial/DeletePaused 2.37
310 TestPause/serial/VerifyDeletedResources 0.44
311 TestNetworkPlugins/group/kindnet/Start 52.14
312 TestNetworkPlugins/group/auto/KubeletFlags 0.43
313 TestNetworkPlugins/group/auto/NetCatPod 11.37
314 TestNetworkPlugins/group/auto/DNS 0.45
315 TestNetworkPlugins/group/auto/Localhost 0.34
316 TestNetworkPlugins/group/auto/HairPin 0.28
317 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/Start 74.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.6
320 TestNetworkPlugins/group/kindnet/NetCatPod 11.44
321 TestNetworkPlugins/group/kindnet/DNS 0.24
322 TestNetworkPlugins/group/kindnet/Localhost 0.2
323 TestNetworkPlugins/group/kindnet/HairPin 0.23
324 TestNetworkPlugins/group/custom-flannel/Start 51.11
325 TestNetworkPlugins/group/calico/ControllerPod 6.01
326 TestNetworkPlugins/group/calico/KubeletFlags 0.45
327 TestNetworkPlugins/group/calico/NetCatPod 11.41
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
330 TestNetworkPlugins/group/calico/DNS 0.24
331 TestNetworkPlugins/group/calico/Localhost 0.22
332 TestNetworkPlugins/group/calico/HairPin 0.26
333 TestNetworkPlugins/group/custom-flannel/DNS 0.27
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
336 TestNetworkPlugins/group/false/Start 74.25
337 TestNetworkPlugins/group/enable-default-cni/Start 73.29
338 TestNetworkPlugins/group/false/KubeletFlags 0.34
339 TestNetworkPlugins/group/false/NetCatPod 10.31
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
342 TestNetworkPlugins/group/false/DNS 0.19
343 TestNetworkPlugins/group/false/Localhost 0.19
344 TestNetworkPlugins/group/false/HairPin 0.2
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.33
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
348 TestNetworkPlugins/group/flannel/Start 59.82
349 TestNetworkPlugins/group/bridge/Start 80.6
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
352 TestNetworkPlugins/group/flannel/NetCatPod 10.3
353 TestNetworkPlugins/group/flannel/DNS 0.19
354 TestNetworkPlugins/group/flannel/Localhost 0.16
355 TestNetworkPlugins/group/flannel/HairPin 0.17
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
357 TestNetworkPlugins/group/bridge/NetCatPod 11.39
358 TestNetworkPlugins/group/kubenet/Start 78.62
359 TestNetworkPlugins/group/bridge/DNS 0.25
360 TestNetworkPlugins/group/bridge/Localhost 0.22
361 TestNetworkPlugins/group/bridge/HairPin 0.21
363 TestStartStop/group/old-k8s-version/serial/FirstStart 90.76
364 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
365 TestNetworkPlugins/group/kubenet/NetCatPod 10.33
366 TestNetworkPlugins/group/kubenet/DNS 0.19
367 TestNetworkPlugins/group/kubenet/Localhost 0.2
368 TestNetworkPlugins/group/kubenet/HairPin 0.17
370 TestStartStop/group/embed-certs/serial/FirstStart 69.1
371 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
373 TestStartStop/group/old-k8s-version/serial/Stop 11.74
374 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/old-k8s-version/serial/SecondStart 60.17
376 TestStartStop/group/embed-certs/serial/DeployApp 9.34
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
378 TestStartStop/group/embed-certs/serial/Stop 11.32
379 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
381 TestStartStop/group/embed-certs/serial/SecondStart 30.38
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
383 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
384 TestStartStop/group/old-k8s-version/serial/Pause 4.43
386 TestStartStop/group/no-preload/serial/FirstStart 78.62
387 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
388 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.18
389 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
390 TestStartStop/group/embed-certs/serial/Pause 4.05
392 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.35
393 TestStartStop/group/no-preload/serial/DeployApp 10.33
394 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
395 TestStartStop/group/no-preload/serial/Stop 11.33
396 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
397 TestStartStop/group/no-preload/serial/SecondStart 51.97
398 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.55
399 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.37
400 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
402 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.58
403 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
404 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
405 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
406 TestStartStop/group/no-preload/serial/Pause 3.3
408 TestStartStop/group/newest-cni/serial/FirstStart 37.43
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.81
413 TestPreload/PreloadSrc/gcs 5.98
414 TestPreload/PreloadSrc/github 9.76
415 TestStartStop/group/newest-cni/serial/DeployApp 0
416 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.77
417 TestStartStop/group/newest-cni/serial/Stop 11.37
418 TestPreload/PreloadSrc/gcs-cached 0.5
419 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/newest-cni/serial/SecondStart 16.24
421 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
422 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
423 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
424 TestStartStop/group/newest-cni/serial/Pause 3.19
TestDownloadOnly/v1.28.0/json-events (8.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-636475 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-636475 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.617296278s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (8.62s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 01:53:44.449429 2222877 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0110 01:53:44.449517 2222877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
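The preload-exists subtest reduces to an existence check on the cached tarball path printed by preload.go above. A minimal illustrative sketch of such a check, with the path copied from the log output (this is not minikube's own implementation):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path copied from the preload.go log lines above.
		p := "/home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4"
		if st, err := os.Stat(p); err == nil {
			fmt.Printf("found local preload: %s (%d bytes)\n", p, st.Size())
		} else {
			fmt.Println("no local preload:", err)
		}
	}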

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-636475
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-636475: exit status 85 (90.870494ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-636475 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-636475 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:35.875406 2222882 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:35.875590 2222882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:35.875622 2222882 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:35.875643 2222882 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:35.875930 2222882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	W0110 01:53:35.876095 2222882 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22414-2221005/.minikube/config/config.json: open /home/jenkins/minikube-integration/22414-2221005/.minikube/config/config.json: no such file or directory
	I0110 01:53:35.876588 2222882 out.go:368] Setting JSON to true
	I0110 01:53:35.877443 2222882 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34565,"bootTime":1767975451,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 01:53:35.877545 2222882 start.go:143] virtualization:  
	I0110 01:53:35.882749 2222882 out.go:99] [download-only-636475] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0110 01:53:35.883027 2222882 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 01:53:35.883134 2222882 notify.go:221] Checking for updates...
	I0110 01:53:35.887872 2222882 out.go:171] MINIKUBE_LOCATION=22414
	I0110 01:53:35.891226 2222882 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:35.894301 2222882 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 01:53:35.897463 2222882 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 01:53:35.900485 2222882 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 01:53:35.906321 2222882 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 01:53:35.906600 2222882 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:35.927178 2222882 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 01:53:35.927270 2222882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:35.984598 2222882 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 01:53:35.97539275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:35.984711 2222882 docker.go:319] overlay module found
	I0110 01:53:35.987780 2222882 out.go:99] Using the docker driver based on user configuration
	I0110 01:53:35.987831 2222882 start.go:309] selected driver: docker
	I0110 01:53:35.987839 2222882 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:35.987948 2222882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:36.046875 2222882 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 01:53:36.03749141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:36.047044 2222882 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:36.047371 2222882 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 01:53:36.047531 2222882 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 01:53:36.050643 2222882 out.go:171] Using Docker driver with root privileges
	I0110 01:53:36.053712 2222882 cni.go:84] Creating CNI manager for ""
	I0110 01:53:36.053790 2222882 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0110 01:53:36.053806 2222882 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0110 01:53:36.053880 2222882 start.go:353] cluster config:
	{Name:download-only-636475 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-636475 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:53:36.056908 2222882 out.go:99] Starting "download-only-636475" primary control-plane node in "download-only-636475" cluster
	I0110 01:53:36.056935 2222882 cache.go:134] Beginning downloading kic base image for docker with docker
	I0110 01:53:36.060010 2222882 out.go:99] Pulling base image v0.0.48-1767944074-22401 ...
	I0110 01:53:36.060061 2222882 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0110 01:53:36.060262 2222882 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 01:53:36.076738 2222882 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 01:53:36.076934 2222882 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 01:53:36.077036 2222882 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 01:53:36.113943 2222882 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0110 01:53:36.113979 2222882 cache.go:65] Caching tarball of preloaded images
	I0110 01:53:36.114157 2222882 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0110 01:53:36.117565 2222882 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0110 01:53:36.117607 2222882 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0110 01:53:36.117614 2222882 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I0110 01:53:36.196385 2222882 preload.go:313] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I0110 01:53:36.196519 2222882 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0110 01:53:40.437844 2222882 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I0110 01:53:40.438334 2222882 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/download-only-636475/config.json ...
	I0110 01:53:40.438399 2222882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/download-only-636475/config.json: {Name:mkea45598d92594fc32209a51218d55b9a765c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:40.438587 2222882 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0110 01:53:40.438849 2222882 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-636475 host does not exist
	  To start a cluster, run: "minikube start -p download-only-636475"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
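Note how the replayed start log fetches the preload tarball with a ?checksum=md5:... query parameter after obtaining the expected MD5 from the GCS API. A sketch of the equivalent post-download verification, assuming the tarball already sits in the working directory (file name taken from the log; the comparison value would come from the GCS response):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Stream the tarball through MD5 rather than reading it into memory.
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			log.Fatal(err)
		}
		// Compare against the checksum the GCS API returned, e.g.
		// "002a73d62a3b066a08573cf3da2c8cb4" in the log above.
		fmt.Println("md5:", hex.EncodeToString(h.Sum(nil)))
	}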

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-636475
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (4.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-736451 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-736451 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.286380237s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (4.29s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 01:53:49.192549 2222877 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I0110 01:53:49.192609 2222877 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-736451
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-736451: exit status 85 (97.031871ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-636475 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-636475 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-636475                                                                                                                                                       │ download-only-636475 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ -o=json --download-only -p download-only-736451 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-736451 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:44
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:44.955353 2223081 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:44.955531 2223081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:44.955544 2223081 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:44.955565 2223081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:44.955873 2223081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 01:53:44.956341 2223081 out.go:368] Setting JSON to true
	I0110 01:53:44.957294 2223081 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":34574,"bootTime":1767975451,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 01:53:44.957369 2223081 start.go:143] virtualization:  
	I0110 01:53:44.960938 2223081 out.go:99] [download-only-736451] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 01:53:44.961249 2223081 notify.go:221] Checking for updates...
	I0110 01:53:44.964895 2223081 out.go:171] MINIKUBE_LOCATION=22414
	I0110 01:53:44.967870 2223081 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:44.970881 2223081 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 01:53:44.973713 2223081 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 01:53:44.976564 2223081 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 01:53:44.982188 2223081 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 01:53:44.982550 2223081 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:45.033222 2223081 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 01:53:45.033348 2223081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:45.198283 2223081 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 01:53:45.170304792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:45.198398 2223081 docker.go:319] overlay module found
	I0110 01:53:45.224325 2223081 out.go:99] Using the docker driver based on user configuration
	I0110 01:53:45.224447 2223081 start.go:309] selected driver: docker
	I0110 01:53:45.224463 2223081 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:45.224588 2223081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:45.327875 2223081 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 01:53:45.318056295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:45.328071 2223081 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:45.328384 2223081 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 01:53:45.328573 2223081 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 01:53:45.331920 2223081 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-736451 host does not exist
	  To start a cluster, run: "minikube start -p download-only-736451"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-736451
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0110 01:53:50.395876 2222877 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-401011 --alsologtostderr --binary-mirror http://127.0.0.1:37415 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-401011" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-401011
--- PASS: TestBinaryMirror (0.60s)
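TestBinaryMirror points minikube at --binary-mirror http://127.0.0.1:37415, i.e. an HTTP server whose directory layout mirrors dl.k8s.io. A minimal stand-in for such a mirror (the mirror-root directory name is hypothetical, not taken from the test):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory tree shaped like dl.k8s.io, e.g.
		// mirror-root/release/v1.35.0/bin/linux/arm64/kubectl
		http.Handle("/", http.FileServer(http.Dir("mirror-root")))
		log.Fatal(http.ListenAndServe("127.0.0.1:37415", nil))
	}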

                                                
                                    
TestOffline (77.42s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-420658 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-420658 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m14.781824457s)
helpers_test.go:176: Cleaning up "offline-docker-420658" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-420658
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-420658: (2.64290632s)
--- PASS: TestOffline (77.42s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-991766
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-991766: exit status 85 (78.606849ms)

                                                
                                                
-- stdout --
	* Profile "addons-991766" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-991766"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-991766
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-991766: exit status 85 (81.716534ms)

                                                
                                                
-- stdout --
	* Profile "addons-991766" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-991766"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (137.85s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-991766 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-991766 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.844917094s)
--- PASS: TestAddons/Setup (137.85s)

                                                
                                    
TestAddons/serial/Volcano (43.57s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 37.189343ms
addons_test.go:886: volcano-controller stabilized in 37.902148ms
addons_test.go:870: volcano-scheduler stabilized in 38.018642ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-dvdrl" [263cf5f0-64c7-47d1-b34e-1ffdbd5c729d] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.035351564s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-lks77" [deb52dc6-9936-4482-b927-bcc462dbf2b0] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004430263s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-b6hcm" [af416fe0-7210-4ba7-a7f6-a80a321ef4c2] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004209082s
addons_test.go:905: (dbg) Run:  kubectl --context addons-991766 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-991766 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-991766 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [df58f951-6bb2-4ca2-9690-1d0cb033692d] Pending
helpers_test.go:353: "test-job-nginx-0" [df58f951-6bb2-4ca2-9690-1d0cb033692d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [df58f951-6bb2-4ca2-9690-1d0cb033692d] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003857082s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable volcano --alsologtostderr -v=1: (11.851880693s)
--- PASS: TestAddons/serial/Volcano (43.57s)
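The Volcano check deletes the one-shot volcano-admission-init job, submits the suite's testdata/vcjob.yaml, and waits on the volcano.sh/job-name pod label. A sketch of the same verification by hand, assuming a manifest like the test's that creates job "test-job" in namespace "my-volcano":

    # Submit a Volcano job and wait for its pod to become Ready
    kubectl --context addons-991766 create -f testdata/vcjob.yaml
    kubectl --context addons-991766 get vcjob -n my-volcano
    kubectl --context addons-991766 wait pod -n my-volcano \
      -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=3m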

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-991766 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-991766 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-991766 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-991766 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [75067020-d225-4da0-b1d6-4dbe27ca5e12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [75067020-d225-4da0-b1d6-4dbe27ca5e12] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003499884s
addons_test.go:696: (dbg) Run:  kubectl --context addons-991766 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-991766 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-991766 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-991766 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.96s)

TestAddons/parallel/Registry (16.67s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 10.362213ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-hzj2x" [3f78a122-f1b4-44a9-ae26-07250f620611] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005823358s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-tnrdw" [4581c883-d7ed-48fa-b5a6-755641a16c5d] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003229252s
addons_test.go:394: (dbg) Run:  kubectl --context addons-991766 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-991766 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-991766 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.723509047s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 ip
2026/01/10 01:57:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.67s)
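The registry check is a plain in-cluster reachability probe: a throwaway busybox pod wgets the registry Service by its cluster DNS name, then the node-side proxy is hit on port 5000. The same probe by hand, using the commands the test drives:

    # Probe the registry Service from inside the cluster
    kubectl --context addons-991766 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # The node IP for the port-5000 proxy check comes from:
    minikube -p addons-991766 ip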

TestAddons/parallel/RegistryCreds (0.89s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.20362ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-991766
addons_test.go:334: (dbg) Run:  kubectl --context addons-991766 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.89s)

TestAddons/parallel/Ingress (17.43s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-991766 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-991766 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-991766 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [2bbb22ea-b0d2-4348-bec4-c9ed939d215f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [2bbb22ea-b0d2-4348-bec4-c9ed939d215f] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003891939s
I0110 01:57:55.349029 2222877 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-991766 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable ingress-dns --alsologtostderr -v=1: (1.707400263s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable ingress --alsologtostderr -v=1: (7.819253813s)
--- PASS: TestAddons/parallel/Ingress (17.43s)
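The ingress check curls the controller from inside the node with a spoofed Host header, then resolves a test hostname through ingress-dns against the node IP (192.168.49.2 in this run). Both steps by hand:

    # Hit the ingress controller with the Host header the rule matches
    minikube -p addons-991766 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Resolve a name served by ingress-dns, using the node IP as the DNS server
    nslookup hello-john.test 192.168.49.2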

TestAddons/parallel/InspektorGadget (11.85s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-vxj6p" [15ffd36c-70cb-4d35-aeb9-42b30c864892] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003446281s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable inspektor-gadget --alsologtostderr -v=1: (5.845749734s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

TestAddons/parallel/MetricsServer (7.02s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.620263ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-6zh2x" [fda964a3-6e9c-4398-bf04-2b886688cf5b] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00431269s
addons_test.go:465: (dbg) Run:  kubectl --context addons-991766 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.02s)

TestAddons/parallel/CSI (43.1s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0110 01:57:28.490761 2222877 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 01:57:28.495753 2222877 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 01:57:28.495788 2222877 kapi.go:107] duration metric: took 9.003563ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 9.014542ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [f711c086-a1d1-4792-93b1-d2b674128467] Pending
helpers_test.go:353: "task-pv-pod" [f711c086-a1d1-4792-93b1-d2b674128467] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [f711c086-a1d1-4792-93b1-d2b674128467] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003760866s
addons_test.go:574: (dbg) Run:  kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-991766 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-991766 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-991766 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-991766 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [5ae65ae3-2a3e-402e-9865-d4f46e6d1279] Pending
helpers_test.go:353: "task-pv-pod-restore" [5ae65ae3-2a3e-402e-9865-d4f46e6d1279] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [5ae65ae3-2a3e-402e-9865-d4f46e6d1279] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005504176s
addons_test.go:616: (dbg) Run:  kubectl --context addons-991766 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-991766 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-991766 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.870596213s)
--- PASS: TestAddons/parallel/CSI (43.10s)
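The CSI flow above is: bind a PVC, write to it from a pod, snapshot the volume, then restore the snapshot into a fresh PVC and pod. A condensed sketch of the round trip with the suite's testdata manifests (each step waits, as the test does, before the next):

    kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Poll until the snapshot reports readyToUse=true
    kubectl --context addons-991766 get volumesnapshot new-snapshot-demo \
      -o jsonpath={.status.readyToUse}
    kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-991766 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml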

TestAddons/parallel/Headlamp (18.03s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-991766 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-991766 --alsologtostderr -v=1: (1.094775164s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-gkmn9" [9beea3e2-88a4-4a7d-896d-26ec00cce6c3] Pending
helpers_test.go:353: "headlamp-6d8d595f-gkmn9" [9beea3e2-88a4-4a7d-896d-26ec00cce6c3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-gkmn9" [9beea3e2-88a4-4a7d-896d-26ec00cce6c3] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003005635s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable headlamp --alsologtostderr -v=1: (5.934213468s)
--- PASS: TestAddons/parallel/Headlamp (18.03s)

TestAddons/parallel/CloudSpanner (6.58s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-77j78" [632bcebc-e997-49c0-ac73-131914351e97] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003942327s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

TestAddons/parallel/LocalPath (53.15s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-991766 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-991766 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [3e4771be-5be0-41b7-ad21-33e081da3ba6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [3e4771be-5be0-41b7-ad21-33e081da3ba6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [3e4771be-5be0-41b7-ad21-33e081da3ba6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003059404s
addons_test.go:969: (dbg) Run:  kubectl --context addons-991766 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 ssh "cat /opt/local-path-provisioner/pvc-3e38a2fa-300c-47aa-8309-5c1edb8625b1_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-991766 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-991766 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.92508173s)
--- PASS: TestAddons/parallel/LocalPath (53.15s)
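The provisioned hostPath directory is named after the PVC's bound volume, so the pvc-3e38a2fa-... path above is specific to this run. A sketch of recovering it by hand; the <volume-name> placeholder is filled from the first command's output:

    # Look up the bound volume name, then read the file the pod wrote
    kubectl --context addons-991766 get pvc test-pvc -o jsonpath={.spec.volumeName}
    minikube -p addons-991766 ssh \
      "cat /opt/local-path-provisioner/<volume-name>_default_test-pvc/file1"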

TestAddons/parallel/NvidiaDevicePlugin (5.5s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-q5l9p" [ad9c1fc6-8880-41a0-b7f1-a9cb35968e37] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003462484s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

TestAddons/parallel/Yakd (11.92s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-z7fs6" [53b02731-53a6-4186-84af-05f72d88ac13] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003856953s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-991766 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-991766 addons disable yakd --alsologtostderr -v=1: (5.916383888s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

TestAddons/StoppedEnableDisable (11.44s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-991766
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-991766: (11.168014143s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-991766
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-991766
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-991766
--- PASS: TestAddons/StoppedEnableDisable (11.44s)
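Note that the enable and disable both succeed while the profile is stopped; as exercised here, toggling an addon on a stopped cluster records the change against the profile rather than requiring a running apiserver. The sequence by hand, with dashboard as the addon:

    minikube stop -p addons-991766
    minikube addons enable dashboard -p addons-991766
    minikube addons disable dashboard -p addons-991766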

TestCertOptions (34.1s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-811152 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-811152 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (30.972451205s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-811152 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-811152 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-811152 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-811152" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-811152
E0110 02:41:09.096307 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-811152: (2.340987439s)
--- PASS: TestCertOptions (34.10s)

TestCertExpiration (247.56s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-610349 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0110 02:40:13.569142 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-610349 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (36.173391952s)
E0110 02:40:58.309183 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-610349 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-610349 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.066838398s)
helpers_test.go:176: Cleaning up "cert-expiration-610349" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-610349
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-610349: (2.318080459s)
--- PASS: TestCertExpiration (247.56s)
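The expiry test starts the profile with three-minute certificates, waits out the window (the gap between the two Done lines above), then restarts with --cert-expiration=8760h so the lapsed certs get renewed. The two start invocations as the test runs them:

    minikube start -p cert-expiration-610349 --memory=3072 --cert-expiration=3m \
      --driver=docker --container-runtime=docker
    # ...wait for the 3m certificates to lapse, then restart with a long expiry
    minikube start -p cert-expiration-610349 --memory=3072 --cert-expiration=8760h \
      --driver=docker --container-runtime=docker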

TestDockerFlags (38.25s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-446176 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-446176 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.705932172s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-446176 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-446176 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-446176" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-446176
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-446176: (2.823212363s)
--- PASS: TestDockerFlags (38.25s)
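The flags test asserts that --docker-env values surface in the docker unit's Environment property and --docker-opt values in its ExecStart line. A sketch of the same verification, with the flag list trimmed from the test's full invocation:

    minikube start -p docker-flags-446176 --memory=3072 --wait=false \
      --docker-env=FOO=BAR --docker-opt=debug \
      --driver=docker --container-runtime=docker
    # Both properties should reflect the flags passed at start
    minikube -p docker-flags-446176 ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags-446176 ssh "sudo systemctl show docker --property=ExecStart --no-pager"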

TestErrorSpam/setup (29.15s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-468906 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-468906 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-468906 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-468906 --driver=docker  --container-runtime=docker: (29.149876727s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (29.15s)

TestErrorSpam/start (0.81s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.17s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.54s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 pause
--- PASS: TestErrorSpam/pause (1.54s)

TestErrorSpam/unpause (1.61s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

TestErrorSpam/stop (11.31s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 stop: (11.092309759s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-468906 --log_dir /tmp/nospam-468906 stop
--- PASS: TestErrorSpam/stop (11.31s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22414-2221005/.minikube/files/etc/test/nested/copy/2222877/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.71s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-394803 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0110 02:01:09.099764 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.105874 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.117546 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.137834 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.178148 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.258538 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.419006 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:09.739595 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:10.380643 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:11.661198 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-394803 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m6.714000631s)
--- PASS: TestFunctional/serial/StartWithProxy (66.71s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.83s)
=== RUN   TestFunctional/serial/SoftStart
I0110 02:01:13.789409 2222877 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-394803 --alsologtostderr -v=8
E0110 02:01:14.222008 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:19.342281 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:29.583351 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:50.063509 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-394803 --alsologtostderr -v=8: (43.828592699s)
functional_test.go:678: soft start took 43.833445828s for "functional-394803" cluster.
I0110 02:01:57.618336 2222877 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (43.83s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-394803 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 cache add registry.k8s.io/pause:3.1: (1.185308475s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 cache add registry.k8s.io/pause:3.3: (1.38683683s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 cache add registry.k8s.io/pause:latest: (1.153400729s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-394803 /tmp/TestFunctionalserialCacheCmdcacheadd_local2072712846/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cache add minikube-local-cache-test:functional-394803
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cache delete minikube-local-cache-test:functional-394803
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-394803
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.553119ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
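The reload cycle: remove the image out from under the node's runtime, confirm crictl no longer sees it, then repopulate it from minikube's on-host cache. By hand:

    minikube -p functional-394803 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-394803 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now fails
    minikube -p functional-394803 cache reload
    minikube -p functional-394803 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again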

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 kubectl -- --context functional-394803 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-394803 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (41.27s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-394803 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0110 02:02:31.024325 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-394803 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.267961858s)
functional_test.go:776: restart took 41.268053369s for "functional-394803" cluster.
I0110 02:02:46.265891 2222877 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (41.27s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-394803 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.23s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 logs: (1.233221687s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 logs --file /tmp/TestFunctionalserialLogsFileCmd4032939600/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 logs --file /tmp/TestFunctionalserialLogsFileCmd4032939600/001/logs.txt: (1.246335791s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.52s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-394803 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-394803
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-394803: exit status 115 (395.090734ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30688 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-394803 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.52s)
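The InvalidService flow above can be replayed by hand; a minimal sketch, assuming the functional-394803 profile from this run is still up and the manifest from the test tree is available:

    # A Service whose selector matches no running pod: `minikube service`
    # is expected to fail with SVC_UNREACHABLE (exit status 115).
    kubectl --context functional-394803 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-394803
    echo $?   # expected: 115
    kubectl --context functional-394803 delete -f testdata/invalidsvc.yaml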

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 config get cpus: exit status 14 (68.722372ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 config get cpus: exit status 14 (76.561505ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
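The ConfigCmd assertions reduce to a set/get/unset round trip; a minimal sketch of the cycle exercised above (exit status 14 is the "key not found" code shown in the captured stderr):

    out/minikube-linux-arm64 -p functional-394803 config get cpus     # unset: exit 14
    out/minikube-linux-arm64 -p functional-394803 config set cpus 2
    out/minikube-linux-arm64 -p functional-394803 config get cpus     # prints 2, exit 0
    out/minikube-linux-arm64 -p functional-394803 config unset cpus
    out/minikube-linux-arm64 -p functional-394803 config get cpus     # exit 14 again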

TestFunctional/parallel/DashboardCmd (15.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-394803 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-394803 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 2264918: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.21s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-394803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-394803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (193.369112ms)

-- stdout --
	* [functional-394803] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0110 02:03:22.780472 2264590 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:03:22.780663 2264590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:03:22.780696 2264590 out.go:374] Setting ErrFile to fd 2...
	I0110 02:03:22.780720 2264590 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:03:22.781178 2264590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:03:22.781674 2264590 out.go:368] Setting JSON to false
	I0110 02:03:22.782714 2264590 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35152,"bootTime":1767975451,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 02:03:22.782844 2264590 start.go:143] virtualization:  
	I0110 02:03:22.786141 2264590 out.go:179] * [functional-394803] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:03:22.790147 2264590 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:03:22.790232 2264590 notify.go:221] Checking for updates...
	I0110 02:03:22.797183 2264590 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:03:22.800178 2264590 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 02:03:22.803103 2264590 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 02:03:22.806190 2264590 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:03:22.809147 2264590 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:03:22.812491 2264590 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:03:22.813261 2264590 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:03:22.844290 2264590 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:03:22.844400 2264590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:03:22.908525 2264590 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:03:22.896586386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:03:22.908626 2264590 docker.go:319] overlay module found
	I0110 02:03:22.911929 2264590 out.go:179] * Using the docker driver based on existing profile
	I0110 02:03:22.914730 2264590 start.go:309] selected driver: docker
	I0110 02:03:22.914750 2264590 start.go:928] validating driver "docker" against &{Name:functional-394803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-394803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:03:22.914859 2264590 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:03:22.918462 2264590 out.go:203] 
	W0110 02:03:22.921322 2264590 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 02:03:22.924211 2264590 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-394803 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.47s)
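Both DryRun invocations validate flags without mutating the cluster; a minimal sketch of the failing and passing halves, taken from the commands above:

    # 250MB is below the 1800MB usable minimum, so validation aborts with
    # exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any node is touched.
    out/minikube-linux-arm64 start -p functional-394803 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=docker
    # Without the memory override the same dry run validates cleanly.
    out/minikube-linux-arm64 start -p functional-394803 --dry-run \
      --alsologtostderr -v=1 --driver=docker --container-runtime=docker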

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-394803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-394803 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (211.774206ms)

-- stdout --
	* [functional-394803] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0110 02:03:22.583009 2264545 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:03:22.583207 2264545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:03:22.583241 2264545 out.go:374] Setting ErrFile to fd 2...
	I0110 02:03:22.583261 2264545 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:03:22.584218 2264545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:03:22.584711 2264545 out.go:368] Setting JSON to false
	I0110 02:03:22.585791 2264545 start.go:133] hostinfo: {"hostname":"ip-172-31-29-130","uptime":35152,"bootTime":1767975451,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0110 02:03:22.585900 2264545 start.go:143] virtualization:  
	I0110 02:03:22.589504 2264545 out.go:179] * [functional-394803] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0110 02:03:22.592631 2264545 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:03:22.592702 2264545 notify.go:221] Checking for updates...
	I0110 02:03:22.598714 2264545 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:03:22.601704 2264545 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	I0110 02:03:22.604660 2264545 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	I0110 02:03:22.607591 2264545 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:03:22.610560 2264545 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:03:22.613786 2264545 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:03:22.614371 2264545 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:03:22.639327 2264545 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:03:22.639454 2264545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:03:22.714605 2264545 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:03:22.692965972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:03:22.714711 2264545 docker.go:319] overlay module found
	I0110 02:03:22.717752 2264545 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 02:03:22.720670 2264545 start.go:309] selected driver: docker
	I0110 02:03:22.720689 2264545 start.go:928] validating driver "docker" against &{Name:functional-394803 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-394803 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:03:22.720877 2264545 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:03:22.724447 2264545 out.go:203] 
	W0110 02:03:22.727347 2264545 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 02:03:22.730180 2264545 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
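The French output comes from minikube's translation layer picking up the test's locale; the exact environment the harness exports is not shown in this log, but a locale override along these lines reproduces it (LC_ALL=fr is an assumption, not taken from the log):

    LC_ALL=fr out/minikube-linux-arm64 start -p functional-394803 --dry-run \
      --memory 250MB --driver=docker --container-runtime=docker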

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
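The three status calls above cover the human-readable, Go-template, and JSON forms; a minimal sketch (the template fields are .Host, .Kubelet, .APIServer, and .Kubeconfig, while the "kublet" label is literal text from the test's format string, reproduced as-is):

    out/minikube-linux-arm64 -p functional-394803 status
    out/minikube-linux-arm64 -p functional-394803 status \
      -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-arm64 -p functional-394803 status -o json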

TestFunctional/parallel/ServiceCmdConnect (7.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-394803 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-394803 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-75sht" [8db507f4-0c2f-4f3f-9bc0-3b0d5697fd67] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-75sht" [8db507f4-0c2f-4f3f-9bc0-3b0d5697fd67] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003488196s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:30544
functional_test.go:1685: http://192.168.49.2:30544: success! body:
Request served by hello-node-connect-5d95464fd4-75sht

HTTP/1.1 GET /

Host: 192.168.49.2:30544
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.60s)
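The ServiceCmdConnect round trip is deploy, expose, resolve, fetch; a minimal sketch assembled from the commands above (the curl step is an assumption standing in for the test's internal HTTP check):

    kubectl --context functional-394803 create deployment hello-node-connect \
      --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-394803 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(out/minikube-linux-arm64 -p functional-394803 service hello-node-connect --url)
    curl "$URL"   # echo server reflects the request back, as in the body above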

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (21.17s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b2398913-6d73-4a5a-adca-88bd2aae7af4] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003814637s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-394803 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-394803 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-394803 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-394803 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [06fea326-6180-4d67-85d3-16198f2adb98] Pending
helpers_test.go:353: "sp-pod" [06fea326-6180-4d67-85d3-16198f2adb98] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [06fea326-6180-4d67-85d3-16198f2adb98] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00401204s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-394803 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-394803 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-394803 delete -f testdata/storage-provisioner/pod.yaml: (1.132474938s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-394803 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [746849da-6485-4e1d-8391-329fb09b1e93] Pending
helpers_test.go:353: "sp-pod" [746849da-6485-4e1d-8391-329fb09b1e93] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003137718s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-394803 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.17s)
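The PersistentVolumeClaim test proves data outlives the pod: it writes through the claim, deletes the pod, recreates it, and reads the file back. A minimal sketch of that sequence, using the manifests named above:

    kubectl --context functional-394803 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-394803 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-394803 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-394803 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-394803 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-394803 exec sp-pod -- ls /tmp/mount   # foo persists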

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.52s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh -n functional-394803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cp functional-394803:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3693250737/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh -n functional-394803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh -n functional-394803 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.52s)
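minikube cp takes a plain path on the host side and <node>:<path> on the guest side; a minimal sketch of the directions checked above:

    out/minikube-linux-arm64 -p functional-394803 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-394803 cp \
      functional-394803:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-arm64 -p functional-394803 ssh -n functional-394803 \
      "sudo cat /home/docker/cp-test.txt"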

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/2222877/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /etc/test/nested/copy/2222877/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)
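FileSync exercises minikube's documented file-sync layout: anything placed under $MINIKUBE_HOME/files/<path> is copied to /<path> inside the node. A minimal sketch assuming that layout (2222877 is the test runner's PID from this log):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/2222877"
    echo "Test file for checking file sync process" \
      > "$MINIKUBE_HOME/files/etc/test/nested/copy/2222877/hosts"
    # the file is synced into the node on the next start
    out/minikube-linux-arm64 -p functional-394803 ssh \
      "sudo cat /etc/test/nested/copy/2222877/hosts"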

TestFunctional/parallel/CertSync (2.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/2222877.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /etc/ssl/certs/2222877.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/2222877.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /usr/share/ca-certificates/2222877.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/22228772.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /etc/ssl/certs/22228772.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/22228772.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /usr/share/ca-certificates/22228772.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.31s)
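The hashed names checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention for /etc/ssl/certs. A sketch of recomputing such a hash on the host, assuming openssl is installed (the PEM path is illustrative, not taken from the log):

    # prints the subject hash that names the in-guest file, e.g. 51391683
    openssl x509 -hash -noout -in "$MINIKUBE_HOME/certs/2222877.pem"
    out/minikube-linux-arm64 -p functional-394803 ssh "sudo cat /etc/ssl/certs/51391683.0"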

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-394803 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh "sudo systemctl is-active crio": exit status 1 (331.023363ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.33s)
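The exit codes above line up as follows: systemctl is-active exits 3 for an inactive unit, minikube ssh surfaces the remote failure as its own non-zero status, and the test treats that as proof crio is disabled. As a sketch:

    out/minikube-linux-arm64 -p functional-394803 ssh "sudo systemctl is-active crio"
    # stdout: inactive; stderr: ssh: Process exited with status 3; local exit: 1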

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 2261590: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-394803 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [6d0ffe6e-76ae-4ab5-af85-1afc3174605c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [6d0ffe6e-76ae-4ab5-af85-1afc3174605c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00478501s
I0110 02:03:04.707015 2222877 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-394803 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.197.127 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
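The tunnel subtests chain together: start a tunnel, create a LoadBalancer service, wait for an ingress IP, then fetch it directly. A minimal sketch of that flow using the manifest named above (backgrounding the tunnel here is an assumption; the harness manages it as a daemon):

    out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr &
    kubectl --context functional-394803 apply -f testdata/testsvc.yaml
    kubectl --context functional-394803 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # e.g. 10.103.197.127
    curl http://10.103.197.127/   # reachable only while the tunnel is up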

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-394803 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-394803 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-394803 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-f66mv" [46c20111-24e8-430e-b759-aa444372136f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-f66mv" [46c20111-24e8-430e-b759-aa444372136f] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00402242s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "358.114299ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "58.615834ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "380.793453ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "52.678953ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (8.37s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdany-port2259675860/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768010598359728635" to /tmp/TestFunctionalparallelMountCmdany-port2259675860/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768010598359728635" to /tmp/TestFunctionalparallelMountCmdany-port2259675860/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768010598359728635" to /tmp/TestFunctionalparallelMountCmdany-port2259675860/001/test-1768010598359728635
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (360.412509ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0110 02:03:18.722232 2222877 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 02:03 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 02:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 02:03 test-1768010598359728635
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh cat /mount-9p/test-1768010598359728635
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-394803 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f7a7ee03-f4b6-40e7-8229-87e172793fe8] Pending
helpers_test.go:353: "busybox-mount" [f7a7ee03-f4b6-40e7-8229-87e172793fe8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f7a7ee03-f4b6-40e7-8229-87e172793fe8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f7a7ee03-f4b6-40e7-8229-87e172793fe8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004348873s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-394803 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdany-port2259675860/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.37s)
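MountCmd/any-port boils down to a 9p mount plus in-guest checks; a minimal sketch (the host directory /tmp/mount-src is illustrative, the test uses a per-run temp dir):

    out/minikube-linux-arm64 mount -p functional-394803 /tmp/mount-src:/mount-9p \
      --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-394803 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-394803 ssh "sudo umount -f /mount-9p"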

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 service list -o json
functional_test.go:1509: Took "719.229016ms" to run "out/minikube-linux-arm64 -p functional-394803 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.72s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31954
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31954
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdspecific-port2129647660/001:/mount-9p --alsologtostderr -v=1 --port 33703]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (685.645282ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0110 02:03:27.412380 2222877 retry.go:84] will retry after 300ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdspecific-port2129647660/001:/mount-9p --alsologtostderr -v=1 --port 33703] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh "sudo umount -f /mount-9p": exit status 1 (329.436932ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-394803 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdspecific-port2129647660/001:/mount-9p --alsologtostderr -v=1 --port 33703] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.72s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3013927156/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3013927156/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3013927156/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T" /mount1: exit status 1 (1.014666333s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-394803 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3013927156/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3013927156/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-394803 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3013927156/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.72s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.26s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 version -o=json --components: (1.262592145s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-394803 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-394803
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-394803 image ls --format short --alsologtostderr:
I0110 02:03:40.369296 2267782 out.go:360] Setting OutFile to fd 1 ...
I0110 02:03:40.369410 2267782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:40.369423 2267782 out.go:374] Setting ErrFile to fd 2...
I0110 02:03:40.369428 2267782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:40.369682 2267782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
I0110 02:03:40.370292 2267782 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:40.370434 2267782 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:40.370994 2267782 cli_runner.go:164] Run: docker container inspect functional-394803 --format={{.State.Status}}
I0110 02:03:40.401909 2267782 ssh_runner.go:195] Run: systemctl --version
I0110 02:03:40.401961 2267782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-394803
I0110 02:03:40.423324 2267782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34763 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/functional-394803/id_rsa Username:docker}
I0110 02:03:40.560484 2267782 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-394803 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ e08f4d9d2e6ed │ 73.4MB │
│ registry.k8s.io/pause                             │ latest            │ 8cb2091f603e7 │ 240kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/pause                             │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 88898f1d1a62a │ 71.1MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                             │ 3.3               │ 3d18732f8686c │ 484kB  │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ c3fcf259c473a │ 83.9MB │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 611c6647fcbbc │ 61.2MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ ddc8422d4d35a │ 48.7MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 271e49a0ebc56 │ 59.8MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-394803 │ ce2d2cda2d858 │ 4.78MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/library/minikube-local-cache-test       │ functional-394803 │ c73df891578b1 │ 30B    │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ de369f46c2ff5 │ 72.8MB │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-394803 image ls --format table --alsologtostderr:
I0110 02:03:41.759095 2268230 out.go:360] Setting OutFile to fd 1 ...
I0110 02:03:41.759273 2268230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.759283 2268230 out.go:374] Setting ErrFile to fd 2...
I0110 02:03:41.759290 2268230 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.759541 2268230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
I0110 02:03:41.760151 2268230 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.760280 2268230 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.760854 2268230 cli_runner.go:164] Run: docker container inspect functional-394803 --format={{.State.Status}}
I0110 02:03:41.780910 2268230 ssh_runner.go:195] Run: systemctl --version
I0110 02:03:41.780983 2268230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-394803
I0110 02:03:41.803448 2268230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34763 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/functional-394803/id_rsa Username:docker}
I0110 02:03:41.907847 2268230 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-394803 image ls --format json --alsologtostderr:
[{"id":"c73df891578b14236768d64b71e2eba0a8e95d43ede6cb38f1ddeb37c3fa7e6d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-394803"],"size":"30"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"48700000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"61200000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9f
d9e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"72800000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4780000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"83900000"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"71100000"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"59800000"},{"id":"e08f4d9d2e6ede818506
4c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"73400000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-394803 image ls --format json --alsologtostderr:
I0110 02:03:41.501482 2268143 out.go:360] Setting OutFile to fd 1 ...
I0110 02:03:41.501728 2268143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.501757 2268143 out.go:374] Setting ErrFile to fd 2...
I0110 02:03:41.501776 2268143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.502202 2268143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
I0110 02:03:41.502949 2268143 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.503148 2268143 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.503865 2268143 cli_runner.go:164] Run: docker container inspect functional-394803 --format={{.State.Status}}
I0110 02:03:41.525186 2268143 ssh_runner.go:195] Run: systemctl --version
I0110 02:03:41.525237 2268143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-394803
I0110 02:03:41.545900 2268143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34763 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/functional-394803/id_rsa Username:docker}
I0110 02:03:41.656827 2268143 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-394803 image ls --format yaml --alsologtostderr:
- id: c73df891578b14236768d64b71e2eba0a8e95d43ede6cb38f1ddeb37c3fa7e6d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-394803
size: "30"
- id: 611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "61200000"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "71100000"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "72800000"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "48700000"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "59800000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "83900000"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "73400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-394803 image ls --format yaml --alsologtostderr:
I0110 02:03:41.226166 2268066 out.go:360] Setting OutFile to fd 1 ...
I0110 02:03:41.226379 2268066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.226405 2268066 out.go:374] Setting ErrFile to fd 2...
I0110 02:03:41.226423 2268066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.226705 2268066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
I0110 02:03:41.228120 2268066 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.228310 2268066 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.228893 2268066 cli_runner.go:164] Run: docker container inspect functional-394803 --format={{.State.Status}}
I0110 02:03:41.261482 2268066 ssh_runner.go:195] Run: systemctl --version
I0110 02:03:41.261537 2268066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-394803
I0110 02:03:41.282486 2268066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34763 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/functional-394803/id_rsa Username:docker}
I0110 02:03:41.388946 2268066 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-394803 ssh pgrep buildkitd: exit status 1 (343.611412ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image build -t localhost/my-image:functional-394803 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-394803 image build -t localhost/my-image:functional-394803 testdata/build --alsologtostderr: (3.362846376s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-394803 image build -t localhost/my-image:functional-394803 testdata/build --alsologtostderr:
I0110 02:03:41.192987 2268065 out.go:360] Setting OutFile to fd 1 ...
I0110 02:03:41.193840 2268065 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.193881 2268065 out.go:374] Setting ErrFile to fd 2...
I0110 02:03:41.193902 2268065 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:03:41.194327 2268065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
I0110 02:03:41.195878 2268065 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.197976 2268065 config.go:182] Loaded profile config "functional-394803": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I0110 02:03:41.198626 2268065 cli_runner.go:164] Run: docker container inspect functional-394803 --format={{.State.Status}}
I0110 02:03:41.227664 2268065 ssh_runner.go:195] Run: systemctl --version
I0110 02:03:41.227713 2268065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-394803
I0110 02:03:41.248676 2268065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34763 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/functional-394803/id_rsa Username:docker}
I0110 02:03:41.355698 2268065 build_images.go:162] Building image from path: /tmp/build.3978035647.tar
I0110 02:03:41.355769 2268065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 02:03:41.364759 2268065 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3978035647.tar
I0110 02:03:41.368689 2268065 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3978035647.tar: stat -c "%s %y" /var/lib/minikube/build/build.3978035647.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3978035647.tar': No such file or directory
I0110 02:03:41.368764 2268065 ssh_runner.go:362] scp /tmp/build.3978035647.tar --> /var/lib/minikube/build/build.3978035647.tar (3072 bytes)
I0110 02:03:41.391658 2268065 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3978035647
I0110 02:03:41.409218 2268065 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3978035647 -xf /var/lib/minikube/build/build.3978035647.tar
I0110 02:03:41.426359 2268065 docker.go:364] Building image: /var/lib/minikube/build/build.3978035647
I0110 02:03:41.426442 2268065 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-394803 /var/lib/minikube/build/build.3978035647
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:cfaf1766937d85fcfef6fd73a2677b0f691a4e4d90fb3d306ca140fe7f551798 done
#8 naming to localhost/my-image:functional-394803 done
#8 DONE 0.1s
I0110 02:03:44.477204 2268065 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-394803 /var/lib/minikube/build/build.3978035647: (3.050739261s)
I0110 02:03:44.477279 2268065 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3978035647
I0110 02:03:44.485691 2268065 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3978035647.tar
I0110 02:03:44.494106 2268065 build_images.go:218] Built localhost/my-image:functional-394803 from /tmp/build.3978035647.tar
I0110 02:03:44.494139 2268065 build_images.go:134] succeeded building to: functional-394803
I0110 02:03:44.494144 2268065 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.94s)

TestFunctional/parallel/ImageCommands/Setup (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.96s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.83s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-394803 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/DockerEnv/bash (1.19s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-394803 docker-env) && out/minikube-linux-arm64 status -p functional-394803"
2026/01/10 02:03:38 [DEBUG] GET http://127.0.0.1:37337/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-394803 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.19s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-394803
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-394803
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-394803
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (148.2s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0110 02:03:52.945452 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:06:09.095551 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m27.199141403s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: (1.0035368s)
--- PASS: TestMultiControlPlane/serial/StartCluster (148.20s)

TestMultiControlPlane/serial/DeployApp (8.85s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 kubectl -- rollout status deployment/busybox: (5.841048364s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-2d8zj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-fm6wj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-lkfq9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-2d8zj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-fm6wj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-lkfq9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-2d8zj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-fm6wj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-lkfq9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.85s)

TestMultiControlPlane/serial/PingHostFromPods (1.85s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-2d8zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-2d8zj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-fm6wj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-fm6wj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-lkfq9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 kubectl -- exec busybox-769dd8b7dd-lkfq9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.85s)

TestMultiControlPlane/serial/AddWorkerNode (36.1s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node add --alsologtostderr -v 5
E0110 02:06:36.785679 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 node add --alsologtostderr -v 5: (34.965052903s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: (1.132861023s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.10s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-942409 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.21104196s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.21s)

TestMultiControlPlane/serial/CopyFile (20.9s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 status --output json --alsologtostderr -v 5: (1.07539992s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp testdata/cp-test.txt ha-942409:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4242622999/001/cp-test_ha-942409.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409:/home/docker/cp-test.txt ha-942409-m02:/home/docker/cp-test_ha-942409_ha-942409-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test_ha-942409_ha-942409-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409:/home/docker/cp-test.txt ha-942409-m03:/home/docker/cp-test_ha-942409_ha-942409-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test_ha-942409_ha-942409-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409:/home/docker/cp-test.txt ha-942409-m04:/home/docker/cp-test_ha-942409_ha-942409-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test_ha-942409_ha-942409-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp testdata/cp-test.txt ha-942409-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4242622999/001/cp-test_ha-942409-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m02:/home/docker/cp-test.txt ha-942409:/home/docker/cp-test_ha-942409-m02_ha-942409.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test_ha-942409-m02_ha-942409.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m02:/home/docker/cp-test.txt ha-942409-m03:/home/docker/cp-test_ha-942409-m02_ha-942409-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test_ha-942409-m02_ha-942409-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m02:/home/docker/cp-test.txt ha-942409-m04:/home/docker/cp-test_ha-942409-m02_ha-942409-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test_ha-942409-m02_ha-942409-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp testdata/cp-test.txt ha-942409-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4242622999/001/cp-test_ha-942409-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m03:/home/docker/cp-test.txt ha-942409:/home/docker/cp-test_ha-942409-m03_ha-942409.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test_ha-942409-m03_ha-942409.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m03:/home/docker/cp-test.txt ha-942409-m02:/home/docker/cp-test_ha-942409-m03_ha-942409-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test_ha-942409-m03_ha-942409-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m03:/home/docker/cp-test.txt ha-942409-m04:/home/docker/cp-test_ha-942409-m03_ha-942409-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test_ha-942409-m03_ha-942409-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp testdata/cp-test.txt ha-942409-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4242622999/001/cp-test_ha-942409-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m04:/home/docker/cp-test.txt ha-942409:/home/docker/cp-test_ha-942409-m04_ha-942409.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409 "sudo cat /home/docker/cp-test_ha-942409-m04_ha-942409.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m04:/home/docker/cp-test.txt ha-942409-m02:/home/docker/cp-test_ha-942409-m04_ha-942409-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m02 "sudo cat /home/docker/cp-test_ha-942409-m04_ha-942409-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 cp ha-942409-m04:/home/docker/cp-test.txt ha-942409-m03:/home/docker/cp-test_ha-942409-m04_ha-942409-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 ssh -n ha-942409-m03 "sudo cat /home/docker/cp-test_ha-942409-m04_ha-942409-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.90s)

TestMultiControlPlane/serial/StopSecondaryNode (12.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 node stop m02 --alsologtostderr -v 5: (11.266361533s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: exit status 7 (854.721141ms)

-- stdout --
	ha-942409
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942409-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942409-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-942409-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0110 02:07:36.055786 2290177 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:07:36.055994 2290177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:07:36.056008 2290177 out.go:374] Setting ErrFile to fd 2...
	I0110 02:07:36.056014 2290177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:07:36.056302 2290177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:07:36.056563 2290177 out.go:368] Setting JSON to false
	I0110 02:07:36.056621 2290177 mustload.go:66] Loading cluster: ha-942409
	I0110 02:07:36.056712 2290177 notify.go:221] Checking for updates...
	I0110 02:07:36.058256 2290177 config.go:182] Loaded profile config "ha-942409": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:07:36.058292 2290177 status.go:174] checking status of ha-942409 ...
	I0110 02:07:36.060094 2290177 cli_runner.go:164] Run: docker container inspect ha-942409 --format={{.State.Status}}
	I0110 02:07:36.087602 2290177 status.go:371] ha-942409 host status = "Running" (err=<nil>)
	I0110 02:07:36.087629 2290177 host.go:66] Checking if "ha-942409" exists ...
	I0110 02:07:36.087933 2290177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-942409
	I0110 02:07:36.126963 2290177 host.go:66] Checking if "ha-942409" exists ...
	I0110 02:07:36.127271 2290177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:07:36.127335 2290177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-942409
	I0110 02:07:36.159564 2290177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/ha-942409/id_rsa Username:docker}
	I0110 02:07:36.262813 2290177 ssh_runner.go:195] Run: systemctl --version
	I0110 02:07:36.269783 2290177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:07:36.283837 2290177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:07:36.349721 2290177 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-10 02:07:36.338875035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:07:36.350241 2290177 kubeconfig.go:125] found "ha-942409" server: "https://192.168.49.254:8443"
	I0110 02:07:36.350288 2290177 api_server.go:166] Checking apiserver status ...
	I0110 02:07:36.350341 2290177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:07:36.364846 2290177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2117/cgroup
	I0110 02:07:36.373692 2290177 api_server.go:192] apiserver freezer: "4:freezer:/docker/27bcbf117845c660c8b0b9eb29a12c7f1c0a6ea7f196b2f1d7d2f8c5c7560b8b/kubepods/burstable/podc1bef432fa2cf79a244ff23796cca7ee/b94f5d88f55ca64f5cc637ba0d9b03ed37a1f91d2b030dd33c1e7aa40e289893"
	I0110 02:07:36.373767 2290177 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/27bcbf117845c660c8b0b9eb29a12c7f1c0a6ea7f196b2f1d7d2f8c5c7560b8b/kubepods/burstable/podc1bef432fa2cf79a244ff23796cca7ee/b94f5d88f55ca64f5cc637ba0d9b03ed37a1f91d2b030dd33c1e7aa40e289893/freezer.state
	I0110 02:07:36.381191 2290177 api_server.go:214] freezer state: "THAWED"
	I0110 02:07:36.381216 2290177 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 02:07:36.389689 2290177 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 02:07:36.389729 2290177 status.go:463] ha-942409 apiserver status = Running (err=<nil>)
	I0110 02:07:36.389745 2290177 status.go:176] ha-942409 status: &{Name:ha-942409 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:07:36.389766 2290177 status.go:174] checking status of ha-942409-m02 ...
	I0110 02:07:36.390118 2290177 cli_runner.go:164] Run: docker container inspect ha-942409-m02 --format={{.State.Status}}
	I0110 02:07:36.410629 2290177 status.go:371] ha-942409-m02 host status = "Stopped" (err=<nil>)
	I0110 02:07:36.410656 2290177 status.go:384] host is not running, skipping remaining checks
	I0110 02:07:36.410663 2290177 status.go:176] ha-942409-m02 status: &{Name:ha-942409-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:07:36.410683 2290177 status.go:174] checking status of ha-942409-m03 ...
	I0110 02:07:36.411057 2290177 cli_runner.go:164] Run: docker container inspect ha-942409-m03 --format={{.State.Status}}
	I0110 02:07:36.428712 2290177 status.go:371] ha-942409-m03 host status = "Running" (err=<nil>)
	I0110 02:07:36.428740 2290177 host.go:66] Checking if "ha-942409-m03" exists ...
	I0110 02:07:36.429076 2290177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-942409-m03
	I0110 02:07:36.446091 2290177 host.go:66] Checking if "ha-942409-m03" exists ...
	I0110 02:07:36.446553 2290177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:07:36.446617 2290177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-942409-m03
	I0110 02:07:36.463412 2290177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34778 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/ha-942409-m03/id_rsa Username:docker}
	I0110 02:07:36.571021 2290177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:07:36.585294 2290177 kubeconfig.go:125] found "ha-942409" server: "https://192.168.49.254:8443"
	I0110 02:07:36.585324 2290177 api_server.go:166] Checking apiserver status ...
	I0110 02:07:36.585369 2290177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:07:36.598258 2290177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2040/cgroup
	I0110 02:07:36.607495 2290177 api_server.go:192] apiserver freezer: "4:freezer:/docker/68b41c282cf80394d212bafd7c1d163953532e46b02d3afb431a999da68a1654/kubepods/burstable/pod2949ed686637213014c1cc8b8d5837a7/6167883aa04ba1348c1462087479f0299c62b6c18f215acd7defab4a5ec2ac1e"
	I0110 02:07:36.607568 2290177 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/68b41c282cf80394d212bafd7c1d163953532e46b02d3afb431a999da68a1654/kubepods/burstable/pod2949ed686637213014c1cc8b8d5837a7/6167883aa04ba1348c1462087479f0299c62b6c18f215acd7defab4a5ec2ac1e/freezer.state
	I0110 02:07:36.615878 2290177 api_server.go:214] freezer state: "THAWED"
	I0110 02:07:36.615949 2290177 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 02:07:36.625959 2290177 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 02:07:36.625991 2290177 status.go:463] ha-942409-m03 apiserver status = Running (err=<nil>)
	I0110 02:07:36.626002 2290177 status.go:176] ha-942409-m03 status: &{Name:ha-942409-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:07:36.626027 2290177 status.go:174] checking status of ha-942409-m04 ...
	I0110 02:07:36.626351 2290177 cli_runner.go:164] Run: docker container inspect ha-942409-m04 --format={{.State.Status}}
	I0110 02:07:36.646049 2290177 status.go:371] ha-942409-m04 host status = "Running" (err=<nil>)
	I0110 02:07:36.646077 2290177 host.go:66] Checking if "ha-942409-m04" exists ...
	I0110 02:07:36.646382 2290177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-942409-m04
	I0110 02:07:36.666790 2290177 host.go:66] Checking if "ha-942409-m04" exists ...
	I0110 02:07:36.667153 2290177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:07:36.667202 2290177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-942409-m04
	I0110 02:07:36.696528 2290177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34783 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/ha-942409-m04/id_rsa Username:docker}
	I0110 02:07:36.802649 2290177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:07:36.822055 2290177 status.go:176] ha-942409-m04 status: &{Name:ha-942409-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.12s)
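Note: the non-zero exit above is expected, not a failure: in this run minikube status exits 7 once any node in the profile is stopped, so callers have to treat that code as data. A minimal Go sketch of the same probe, reusing the binary path and profile name from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the test step above; with ha-942409-m02 stopped,
        // this run exited 7 rather than 0.
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-942409", "status")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if ee, ok := err.(*exec.ExitError); ok {
            fmt.Println("status exit code:", ee.ExitCode())
        }
    }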

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.73s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node start m02 --alsologtostderr -v 5
E0110 02:07:55.261216 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.266539 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.276901 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.297289 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.337647 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.417940 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.578312 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:55.898626 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:56.539252 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:07:57.819632 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:08:00.381154 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:08:05.501429 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:08:15.741809 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 node start m02 --alsologtostderr -v 5: (43.398010484s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: (1.223071114s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.070599671s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (163.46s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 stop --alsologtostderr -v 5
E0110 02:08:36.222060 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 stop --alsologtostderr -v 5: (35.450628467s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 start --wait true --alsologtostderr -v 5
E0110 02:09:17.184154 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:10:39.104956 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 start --wait true --alsologtostderr -v 5: (2m7.845042049s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (163.46s)
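Note: the check here is that node list output is taken before the stop and again after the restart, presumably so the two can be compared. A rough Go equivalent of the sequence above, reusing the binary and profile names from this run (error handling elided; the real test checks every step):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func nodeList() string {
        out, _ := exec.Command("out/minikube-linux-arm64", "-p", "ha-942409", "node", "list").Output()
        return string(out)
    }

    func main() {
        before := nodeList()
        exec.Command("out/minikube-linux-arm64", "-p", "ha-942409", "stop").Run()
        exec.Command("out/minikube-linux-arm64", "-p", "ha-942409", "start", "--wait", "true").Run()
        fmt.Println("node list preserved across restart:", nodeList() == before)
    }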

TestMultiControlPlane/serial/DeleteSecondaryNode (11.91s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node delete m03 --alsologtostderr -v 5
E0110 02:11:09.095685 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 node delete m03 --alsologtostderr -v 5: (10.889082283s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.91s)
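Note: the go-template in the final kubectl step prints the status of each node's Ready condition, one per line; after deleting m03, the expectation is presumably that every remaining node still reports True. The same check in Go (kubectl is assumed to point at this cluster; the outer single quotes in the logged command are shell quoting, not part of the template):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as the test: for every node, print the status of its
        // Ready condition, one per line.
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
        if err != nil {
            panic(err)
        }
        for _, s := range strings.Fields(string(out)) {
            fmt.Println("node Ready:", s == "True")
        }
    }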

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.87s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.87s)

TestMultiControlPlane/serial/StopCluster (33.69s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 stop --alsologtostderr -v 5: (33.571161375s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: exit status 7 (122.585037ms)
-- stdout --
	ha-942409
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942409-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-942409-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 02:11:53.366412 2317782 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:11:53.366554 2317782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:11:53.366566 2317782 out.go:374] Setting ErrFile to fd 2...
	I0110 02:11:53.366572 2317782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:11:53.366834 2317782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:11:53.367051 2317782 out.go:368] Setting JSON to false
	I0110 02:11:53.367080 2317782 mustload.go:66] Loading cluster: ha-942409
	I0110 02:11:53.367492 2317782 config.go:182] Loaded profile config "ha-942409": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:11:53.367517 2317782 status.go:174] checking status of ha-942409 ...
	I0110 02:11:53.368079 2317782 cli_runner.go:164] Run: docker container inspect ha-942409 --format={{.State.Status}}
	I0110 02:11:53.368370 2317782 notify.go:221] Checking for updates...
	I0110 02:11:53.388409 2317782 status.go:371] ha-942409 host status = "Stopped" (err=<nil>)
	I0110 02:11:53.388430 2317782 status.go:384] host is not running, skipping remaining checks
	I0110 02:11:53.388438 2317782 status.go:176] ha-942409 status: &{Name:ha-942409 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:11:53.388468 2317782 status.go:174] checking status of ha-942409-m02 ...
	I0110 02:11:53.388776 2317782 cli_runner.go:164] Run: docker container inspect ha-942409-m02 --format={{.State.Status}}
	I0110 02:11:53.414674 2317782 status.go:371] ha-942409-m02 host status = "Stopped" (err=<nil>)
	I0110 02:11:53.414700 2317782 status.go:384] host is not running, skipping remaining checks
	I0110 02:11:53.414708 2317782 status.go:176] ha-942409-m02 status: &{Name:ha-942409-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:11:53.414726 2317782 status.go:174] checking status of ha-942409-m04 ...
	I0110 02:11:53.415032 2317782 cli_runner.go:164] Run: docker container inspect ha-942409-m04 --format={{.State.Status}}
	I0110 02:11:53.434016 2317782 status.go:371] ha-942409-m04 host status = "Stopped" (err=<nil>)
	I0110 02:11:53.434038 2317782 status.go:384] host is not running, skipping remaining checks
	I0110 02:11:53.434045 2317782 status.go:176] ha-942409-m04 status: &{Name:ha-942409-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.69s)

TestMultiControlPlane/serial/RestartCluster (67.7s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0110 02:12:55.258159 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m6.429753401s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: (1.08534133s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.70s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.86s)

TestMultiControlPlane/serial/AddSecondaryNode (61.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 node add --control-plane --alsologtostderr -v 5
E0110 02:13:22.946711 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 node add --control-plane --alsologtostderr -v 5: (59.90487963s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-942409 status --alsologtostderr -v 5: (1.235074593s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (61.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.138545103s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.14s)

TestImageBuild/serial/Setup (28.56s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-135610 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-135610 --driver=docker  --container-runtime=docker: (28.564819636s)
--- PASS: TestImageBuild/serial/Setup (28.56s)

TestImageBuild/serial/NormalBuild (1.96s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-135610
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-135610: (1.957804287s)
--- PASS: TestImageBuild/serial/NormalBuild (1.96s)

TestImageBuild/serial/BuildWithBuildArg (1.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-135610
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-135610: (1.095526173s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.10s)

TestImageBuild/serial/BuildWithDockerIgnore (1.26s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-135610
image_test.go:133: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-135610: (1.255797528s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.26s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-135610
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

TestJSONOutput/start/Command (67.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-512104 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-512104 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m7.472726597s)
--- PASS: TestJSONOutput/start/Command (67.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-512104 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-512104 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.2s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-512104 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-512104 --output=json --user=testUser: (11.199492106s)
--- PASS: TestJSONOutput/stop/Command (11.20s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-392516 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-392516 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.368797ms)
-- stdout --
	{"specversion":"1.0","id":"d8e3dd1c-6181-4398-be0d-00b7d7b3fc26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-392516] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"51019aaf-64e5-4940-9fa7-19b9ba452444","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22414"}}
	{"specversion":"1.0","id":"9531a668-e49b-42d0-aa74-3e13703b5017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"74126d62-6938-43ea-af6e-bf197f92f941","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig"}}
	{"specversion":"1.0","id":"d5e0cfeb-b40d-42e8-9b2e-76b357aae26d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube"}}
	{"specversion":"1.0","id":"09343eeb-67c5-48b0-b3a4-a8b9934bbead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6d540116-cb61-41b0-8caf-1269c571bbd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"44966451-39dc-4670-9030-fbc45ee827c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-392516" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-392516
--- PASS: TestErrorJSONOutput (0.23s)
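Note: each stdout line above is a CloudEvents-style JSON envelope; the types seen in this report are io.k8s.sigs.minikube.step, .info, and .error. The fields visible in this run decode with a small struct. A sketch for consuming such a stream from stdin, surfacing error events like the DRV_UNSUPPORTED_OS one above (the program name "eventwatch" is just an illustration):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the envelope fields visible in the stream above.
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | eventwatch
        for sc.Scan() {
            var e event
            if json.Unmarshal(sc.Bytes(), &e) != nil {
                continue // tolerate non-JSON lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
            }
        }
    }

All data values in this run's events are strings (even numeric-looking ones like "currentstep" and "exitcode"), which is why a map[string]string suffices here.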

TestKicCustomNetwork/create_custom_network (30.22s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-312322 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-312322 --network=: (28.038920598s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-312322" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-312322
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-312322: (2.161345901s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.22s)
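Note: the verification step above is a plain docker network ls. A sketch of the same check, assuming (as this run suggests) that with a bare --network= flag minikube names the created network after the profile:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            panic(err)
        }
        // Profile name from this run; presumably the test looks for a match.
        found := false
        for _, name := range strings.Fields(string(out)) {
            if name == "docker-network-312322" {
                found = true
            }
        }
        fmt.Println("custom network present:", found)
    }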

TestKicCustomNetwork/use_default_bridge_network (31.13s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-644763 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-644763 --network=bridge: (29.038758955s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-644763" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-644763
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-644763: (2.06695497s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.13s)

TestKicExistingNetwork (30.31s)

=== RUN   TestKicExistingNetwork
I0110 02:17:12.067202 2222877 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 02:17:12.084014 2222877 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 02:17:12.084100 2222877 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 02:17:12.084117 2222877 cli_runner.go:164] Run: docker network inspect existing-network
W0110 02:17:12.100656 2222877 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 02:17:12.100701 2222877 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0110 02:17:12.100715 2222877 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0110 02:17:12.100829 2222877 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 02:17:12.118248 2222877 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eeafa1ec40c7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:dd:85:54:7e:14} reservation:<nil>}
I0110 02:17:12.118562 2222877 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4004c37750}
I0110 02:17:12.118593 2222877 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 02:17:12.118644 2222877 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 02:17:12.181227 2222877 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-252466 --network=existing-network
E0110 02:17:32.145945 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-252466 --network=existing-network: (28.077639241s)
helpers_test.go:176: Cleaning up "existing-network-252466" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-252466
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-252466: (2.087896604s)
I0110 02:17:42.362992 2222877 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.31s)
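Note: the network_create lines above show both the subnet scan (192.168.49.0/24 was already taken, so 192.168.58.0/24 was chosen) and the exact docker invocation used. Recreating the pre-seeded network by hand with the same flags and labels, as a Go sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the `docker network create` invocation logged above,
        // including the labels minikube uses to tag its own networks.
        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24",
            "--gateway=192.168.58.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("network create failed:", err)
        }
    }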

TestKicCustomSubnet (31.02s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-328531 --subnet=192.168.60.0/24
E0110 02:17:55.261225 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-328531 --subnet=192.168.60.0/24: (28.814962337s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-328531 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-328531" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-328531
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-328531: (2.17773387s)
--- PASS: TestKicCustomSubnet (31.02s)

TestKicStaticIP (31.98s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-811992 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-811992 --static-ip=192.168.200.200: (29.492976914s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-811992 ip
helpers_test.go:176: Cleaning up "static-ip-811992" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-811992
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-811992: (2.332356454s)
--- PASS: TestKicStaticIP (31.98s)
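Note: the round-trip here is start with --static-ip, then read the address back with minikube ip; presumably the assertion is that the two match. The same check in Go, using the binary path and profile from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-811992", "ip").Output()
        if err != nil {
            panic(err)
        }
        got := strings.TrimSpace(string(out))
        fmt.Println("static IP honored:", got == "192.168.200.200", got)
    }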

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (62.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-967692 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-967692 --driver=docker  --container-runtime=docker: (25.837305897s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-970416 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-970416 --driver=docker  --container-runtime=docker: (29.969782226s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-967692
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-970416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-970416" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-970416
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-970416: (2.482511849s)
helpers_test.go:176: Cleaning up "first-967692" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-967692
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-967692: (2.278109885s)
--- PASS: TestMinikubeProfile (62.08s)

TestMountStart/serial/StartWithMountFirst (9.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-558060 --memory=3072 --mount-string /tmp/TestMountStartserial562452357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-558060 --memory=3072 --mount-string /tmp/TestMountStartserial562452357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.974086698s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.97s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-558060 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
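Note: the verify step only lists the guest side of the 9p mount. To see data actually cross the mount, write on the host side of the --mount-string pair and list through minikube ssh. A sketch using the paths from this run; the temp directory is per-run and hello.txt is just an example file name, so substitute your own mount source:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        // Host side of the mount, taken from this run's --mount-string flag.
        hostDir := "/tmp/TestMountStartserial562452357/001"
        if err := os.WriteFile(filepath.Join(hostDir, "hello.txt"), []byte("hi\n"), 0o644); err != nil {
            panic(err)
        }
        // Guest side: the file should appear under /minikube-host.
        out, _ := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-558060",
            "ssh", "--", "ls", "/minikube-host").CombinedOutput()
        fmt.Print(string(out))
    }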

TestMountStart/serial/StartWithMountSecond (7.74s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-559898 --memory=3072 --mount-string /tmp/TestMountStartserial562452357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-559898 --memory=3072 --mount-string /tmp/TestMountStartserial562452357/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.744477835s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.74s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-558060 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-558060 --alsologtostderr -v=5: (1.585723614s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-559898
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-559898: (1.296454005s)
--- PASS: TestMountStart/serial/Stop (1.30s)

TestMountStart/serial/RestartStopped (8.86s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-559898
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-559898: (7.860790741s)
--- PASS: TestMountStart/serial/RestartStopped (8.86s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
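
Note: Stop, RestartStopped, and VerifyMountPostStop together check that the mount settings persist in the saved profile and are re-applied on restart. A condensed sketch of that cycle, reusing the placeholder profile from the note above:

	minikube stop -p mount-demo
	minikube start -p mount-demo                     # mount flags are reloaded from the saved profile
	minikube -p mount-demo ssh -- ls /minikube-host  # the mount is back after the restart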

TestMultiNode/serial/FreshStart2Nodes (85.55s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-531044 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0110 02:21:09.096085 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-531044 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.850313693s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.55s)
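
Note: a hand-run equivalent of the fresh two-node start above (the profile name is a placeholder); --wait=true blocks until all Kubernetes components report ready:

	minikube start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
	  --driver=docker --container-runtime=docker
	minikube -p multinode-demo status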

TestMultiNode/serial/DeployApp2Nodes (6.13s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-531044 -- rollout status deployment/busybox: (4.173929639s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-8wh8z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-zkjbb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-8wh8z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-zkjbb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-8wh8z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-zkjbb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.13s)
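
Note: the deploy step drives kubectl through minikube's bundled wrapper and then checks in-cluster DNS from each pod. A sketch of the same flow; the manifest and pod name are placeholders:

	minikube kubectl -p multinode-demo -- apply -f multinode-pod-dns-test.yaml
	minikube kubectl -p multinode-demo -- rollout status deployment/busybox
	# the test repeats lookups like this one from every busybox pod
	minikube kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local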

TestMultiNode/serial/PingHostFrom2Pods (1.02s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-8wh8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-8wh8z -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-zkjbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-531044 -- exec busybox-769dd8b7dd-zkjbb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
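
Note: the host-ping check resolves host.minikube.internal inside each pod and pings the resulting address. The same probe by hand (the pod name is a placeholder; 192.168.67.1 is the gateway observed in this run):

	minikube kubectl -p multinode-demo -- exec <busybox-pod> -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube kubectl -p multinode-demo -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.67.1"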

TestMultiNode/serial/AddNode (32.46s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-531044 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-531044 -v=5 --alsologtostderr: (31.724611892s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.46s)

TestMultiNode/serial/MultiNodeLabels (0.1s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-531044 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.73s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

TestMultiNode/serial/CopyFile (10.75s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp testdata/cp-test.txt multinode-531044:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2473026046/001/cp-test_multinode-531044.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044:/home/docker/cp-test.txt multinode-531044-m02:/home/docker/cp-test_multinode-531044_multinode-531044-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m02 "sudo cat /home/docker/cp-test_multinode-531044_multinode-531044-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044:/home/docker/cp-test.txt multinode-531044-m03:/home/docker/cp-test_multinode-531044_multinode-531044-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m03 "sudo cat /home/docker/cp-test_multinode-531044_multinode-531044-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp testdata/cp-test.txt multinode-531044-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2473026046/001/cp-test_multinode-531044-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044-m02:/home/docker/cp-test.txt multinode-531044:/home/docker/cp-test_multinode-531044-m02_multinode-531044.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044 "sudo cat /home/docker/cp-test_multinode-531044-m02_multinode-531044.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044-m02:/home/docker/cp-test.txt multinode-531044-m03:/home/docker/cp-test_multinode-531044-m02_multinode-531044-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m03 "sudo cat /home/docker/cp-test_multinode-531044-m02_multinode-531044-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp testdata/cp-test.txt multinode-531044-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2473026046/001/cp-test_multinode-531044-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044-m03:/home/docker/cp-test.txt multinode-531044:/home/docker/cp-test_multinode-531044-m03_multinode-531044.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044 "sudo cat /home/docker/cp-test_multinode-531044-m03_multinode-531044.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 cp multinode-531044-m03:/home/docker/cp-test.txt multinode-531044-m02:/home/docker/cp-test_multinode-531044-m03_multinode-531044-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 ssh -n multinode-531044-m02 "sudo cat /home/docker/cp-test_multinode-531044-m03_multinode-531044-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.75s)
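
Note: CopyFile round-trips a file through every host/node and node/node pair. One leg of that matrix, with placeholder names, is enough to show the pattern:

	# host -> node, then read the file back over SSH
	minikube -p multinode-demo cp cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"
	# node -> node copies use the same syntax with a node-qualified source
	minikube -p multinode-demo cp multinode-demo-m02:/home/docker/cp-test.txt \
	  multinode-demo:/home/docker/cp-test-copy.txt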

TestMultiNode/serial/StopNode (2.49s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-531044 node stop m03: (1.370350643s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-531044 status: exit status 7 (558.770925ms)
-- stdout --
	multinode-531044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-531044-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-531044-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr: exit status 7 (557.840301ms)
-- stdout --
	multinode-531044
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-531044-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-531044-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 02:22:38.503990 2390775 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:22:38.504136 2390775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:22:38.504161 2390775 out.go:374] Setting ErrFile to fd 2...
	I0110 02:22:38.504174 2390775 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:22:38.504569 2390775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:22:38.504871 2390775 out.go:368] Setting JSON to false
	I0110 02:22:38.504921 2390775 mustload.go:66] Loading cluster: multinode-531044
	I0110 02:22:38.505822 2390775 config.go:182] Loaded profile config "multinode-531044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:22:38.505856 2390775 status.go:174] checking status of multinode-531044 ...
	I0110 02:22:38.506653 2390775 cli_runner.go:164] Run: docker container inspect multinode-531044 --format={{.State.Status}}
	I0110 02:22:38.507543 2390775 notify.go:221] Checking for updates...
	I0110 02:22:38.526555 2390775 status.go:371] multinode-531044 host status = "Running" (err=<nil>)
	I0110 02:22:38.526580 2390775 host.go:66] Checking if "multinode-531044" exists ...
	I0110 02:22:38.526892 2390775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-531044
	I0110 02:22:38.549448 2390775 host.go:66] Checking if "multinode-531044" exists ...
	I0110 02:22:38.549740 2390775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:22:38.549796 2390775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-531044
	I0110 02:22:38.571566 2390775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34893 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/multinode-531044/id_rsa Username:docker}
	I0110 02:22:38.676455 2390775 ssh_runner.go:195] Run: systemctl --version
	I0110 02:22:38.684554 2390775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:22:38.697592 2390775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:22:38.766232 2390775 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:22:38.755944805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:22:38.766792 2390775 kubeconfig.go:125] found "multinode-531044" server: "https://192.168.67.2:8443"
	I0110 02:22:38.766832 2390775 api_server.go:166] Checking apiserver status ...
	I0110 02:22:38.766885 2390775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:22:38.780912 2390775 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2166/cgroup
	I0110 02:22:38.790501 2390775 api_server.go:192] apiserver freezer: "4:freezer:/docker/4199b687b4491840c34d4505501c6485871abee48147961d245dc5b06a7b43d8/kubepods/burstable/pod9ea185db241fcd5cd3c6300a17bed721/750331ef40899203aa1531c2636fa5a5e1872ea15774b27d7642bf626be948fe"
	I0110 02:22:38.790575 2390775 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4199b687b4491840c34d4505501c6485871abee48147961d245dc5b06a7b43d8/kubepods/burstable/pod9ea185db241fcd5cd3c6300a17bed721/750331ef40899203aa1531c2636fa5a5e1872ea15774b27d7642bf626be948fe/freezer.state
	I0110 02:22:38.798456 2390775 api_server.go:214] freezer state: "THAWED"
	I0110 02:22:38.798488 2390775 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 02:22:38.806724 2390775 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 02:22:38.806756 2390775 status.go:463] multinode-531044 apiserver status = Running (err=<nil>)
	I0110 02:22:38.806768 2390775 status.go:176] multinode-531044 status: &{Name:multinode-531044 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:22:38.806786 2390775 status.go:174] checking status of multinode-531044-m02 ...
	I0110 02:22:38.807103 2390775 cli_runner.go:164] Run: docker container inspect multinode-531044-m02 --format={{.State.Status}}
	I0110 02:22:38.825853 2390775 status.go:371] multinode-531044-m02 host status = "Running" (err=<nil>)
	I0110 02:22:38.825911 2390775 host.go:66] Checking if "multinode-531044-m02" exists ...
	I0110 02:22:38.826225 2390775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-531044-m02
	I0110 02:22:38.844992 2390775 host.go:66] Checking if "multinode-531044-m02" exists ...
	I0110 02:22:38.845440 2390775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:22:38.845489 2390775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-531044-m02
	I0110 02:22:38.863488 2390775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34898 SSHKeyPath:/home/jenkins/minikube-integration/22414-2221005/.minikube/machines/multinode-531044-m02/id_rsa Username:docker}
	I0110 02:22:38.969660 2390775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:22:38.984137 2390775 status.go:176] multinode-531044-m02 status: &{Name:multinode-531044-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:22:38.984175 2390775 status.go:174] checking status of multinode-531044-m03 ...
	I0110 02:22:38.984508 2390775 cli_runner.go:164] Run: docker container inspect multinode-531044-m03 --format={{.State.Status}}
	I0110 02:22:39.002746 2390775 status.go:371] multinode-531044-m03 host status = "Stopped" (err=<nil>)
	I0110 02:22:39.002770 2390775 status.go:384] host is not running, skipping remaining checks
	I0110 02:22:39.002778 2390775 status.go:176] multinode-531044-m03 status: &{Name:multinode-531044-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
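
Note: as the output above shows, `status` exits non-zero (7 here) whenever any node's host is stopped, so scripts can gate on the exit code rather than parsing the text. By hand, with a placeholder profile:

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status         # prints per-node state, exits 7 while m03 is down
	minikube -p multinode-demo node start m03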

TestMultiNode/serial/StartAfterStop (9.9s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-531044 node start m03 -v=5 --alsologtostderr: (9.081906061s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.90s)

TestMultiNode/serial/RestartKeepsNodes (74.49s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-531044
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-531044
E0110 02:22:55.258927 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-531044: (23.236851008s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-531044 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-531044 --wait=true -v=5 --alsologtostderr: (51.124007769s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-531044
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.49s)

TestMultiNode/serial/DeleteNode (5.85s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-531044 node delete m03: (5.155526918s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.85s)

TestMultiNode/serial/StopMultiNode (22.16s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 stop
E0110 02:24:18.306983 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-531044 stop: (21.979719022s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-531044 status: exit status 7 (89.042601ms)
-- stdout --
	multinode-531044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-531044-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr: exit status 7 (90.327255ms)
-- stdout --
	multinode-531044
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-531044-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0110 02:24:31.369461 2404497 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:24:31.369576 2404497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:31.369587 2404497 out.go:374] Setting ErrFile to fd 2...
	I0110 02:24:31.369594 2404497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:31.369862 2404497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:24:31.370060 2404497 out.go:368] Setting JSON to false
	I0110 02:24:31.370092 2404497 mustload.go:66] Loading cluster: multinode-531044
	I0110 02:24:31.370252 2404497 notify.go:221] Checking for updates...
	I0110 02:24:31.370536 2404497 config.go:182] Loaded profile config "multinode-531044": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:24:31.370561 2404497 status.go:174] checking status of multinode-531044 ...
	I0110 02:24:31.371397 2404497 cli_runner.go:164] Run: docker container inspect multinode-531044 --format={{.State.Status}}
	I0110 02:24:31.390011 2404497 status.go:371] multinode-531044 host status = "Stopped" (err=<nil>)
	I0110 02:24:31.390033 2404497 status.go:384] host is not running, skipping remaining checks
	I0110 02:24:31.390039 2404497 status.go:176] multinode-531044 status: &{Name:multinode-531044 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:24:31.390085 2404497 status.go:174] checking status of multinode-531044-m02 ...
	I0110 02:24:31.390427 2404497 cli_runner.go:164] Run: docker container inspect multinode-531044-m02 --format={{.State.Status}}
	I0110 02:24:31.410745 2404497 status.go:371] multinode-531044-m02 host status = "Stopped" (err=<nil>)
	I0110 02:24:31.410774 2404497 status.go:384] host is not running, skipping remaining checks
	I0110 02:24:31.410781 2404497 status.go:176] multinode-531044-m02 status: &{Name:multinode-531044-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.16s)

TestMultiNode/serial/RestartMultiNode (52.05s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-531044 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-531044 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (51.35510255s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-531044 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.05s)

TestMultiNode/serial/ValidateNameConflict (32.08s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-531044
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-531044-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-531044-m02 --driver=docker  --container-runtime=docker: exit status 14 (94.139139ms)
-- stdout --
	* [multinode-531044-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-531044-m02' is duplicated with machine name 'multinode-531044-m02' in profile 'multinode-531044'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-531044-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-531044-m03 --driver=docker  --container-runtime=docker: (29.362428306s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-531044
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-531044: exit status 80 (337.930439ms)
-- stdout --
	* Adding node m03 to cluster multinode-531044 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-531044-m03 already exists in multinode-531044-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-531044-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-531044-m03: (2.232834507s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.08s)
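
Note: the two expected failures above pin down the naming rules: a new profile may not collide with a machine name inside an existing multi-node profile (exit 14, MK_USAGE), and `node add` refuses a node name already claimed elsewhere (exit 80, GUEST_NODE_ADD). For example, with the cluster above still present:

	minikube start -p multinode-531044-m02 --driver=docker --container-runtime=docker
	# -> exit status 14: "Profile name should be unique"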

TestScheduledStopUnix (102.71s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-264951 --memory=3072 --driver=docker  --container-runtime=docker
E0110 02:26:09.096226 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-264951 --memory=3072 --driver=docker  --container-runtime=docker: (29.383175683s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-264951 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I0110 02:26:29.238407 2418361 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:29.238529 2418361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:29.238539 2418361 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:29.238545 2418361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:29.239320 2418361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:26:29.239714 2418361 out.go:368] Setting JSON to false
	I0110 02:26:29.239884 2418361 mustload.go:66] Loading cluster: scheduled-stop-264951
	I0110 02:26:29.240331 2418361 config.go:182] Loaded profile config "scheduled-stop-264951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:26:29.240479 2418361 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/scheduled-stop-264951/config.json ...
	I0110 02:26:29.242115 2418361 mustload.go:66] Loading cluster: scheduled-stop-264951
	I0110 02:26:29.242329 2418361 config.go:182] Loaded profile config "scheduled-stop-264951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-264951 -n scheduled-stop-264951
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-264951 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I0110 02:26:29.712507 2418447 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:29.712715 2418447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:29.712748 2418447 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:29.712770 2418447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:29.713093 2418447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:26:29.713402 2418447 out.go:368] Setting JSON to false
	I0110 02:26:29.713673 2418447 daemonize_unix.go:73] killing process 2418378 as it is an old scheduled stop
	I0110 02:26:29.713862 2418447 mustload.go:66] Loading cluster: scheduled-stop-264951
	I0110 02:26:29.714268 2418447 config.go:182] Loaded profile config "scheduled-stop-264951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:26:29.714394 2418447 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/scheduled-stop-264951/config.json ...
	I0110 02:26:29.714594 2418447 mustload.go:66] Loading cluster: scheduled-stop-264951
	I0110 02:26:29.714737 2418447 config.go:182] Loaded profile config "scheduled-stop-264951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 02:26:29.723733 2222877 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/scheduled-stop-264951/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-264951 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-264951 -n scheduled-stop-264951
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-264951
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-264951 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I0110 02:26:55.668065 2419171 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:26:55.668190 2419171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:55.668200 2419171 out.go:374] Setting ErrFile to fd 2...
	I0110 02:26:55.668206 2419171 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:26:55.668457 2419171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2221005/.minikube/bin
	I0110 02:26:55.668709 2419171 out.go:368] Setting JSON to false
	I0110 02:26:55.668806 2419171 mustload.go:66] Loading cluster: scheduled-stop-264951
	I0110 02:26:55.669193 2419171 config.go:182] Loaded profile config "scheduled-stop-264951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I0110 02:26:55.669284 2419171 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/scheduled-stop-264951/config.json ...
	I0110 02:26:55.669479 2419171 mustload.go:66] Loading cluster: scheduled-stop-264951
	I0110 02:26:55.669602 2419171 config.go:182] Loaded profile config "scheduled-stop-264951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-264951
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-264951: exit status 7 (75.562205ms)
-- stdout --
	scheduled-stop-264951
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-264951 -n scheduled-stop-264951
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-264951 -n scheduled-stop-264951: exit status 7 (72.417591ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-264951" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-264951
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-264951: (1.692728467s)
--- PASS: TestScheduledStopUnix (102.71s)
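
Note: the scheduled-stop flags exercised above, in order; as the daemonize log line shows, re-arming kills the previously scheduled stop process (the profile name is a placeholder):

	minikube stop -p demo --schedule 5m        # arm a stop 5 minutes out and return immediately
	minikube stop -p demo --schedule 15s       # re-arming replaces the earlier schedule
	minikube stop -p demo --cancel-scheduled   # "All existing scheduled stops cancelled"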

TestSkaffold (137.68s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3993495938 version
skaffold_test.go:63: skaffold version: v2.17.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-245004 --memory=3072 --driver=docker  --container-runtime=docker
E0110 02:27:55.262408 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-245004 --memory=3072 --driver=docker  --container-runtime=docker: (28.778498853s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3993495938 run --minikube-profile skaffold-245004 --kube-context skaffold-245004 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3993495938 run --minikube-profile skaffold-245004 --kube-context skaffold-245004 --status-check=true --port-forward=false --interactive=false: (1m32.898112058s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-58596947cf-4bg5x" [717456be-7c89-40b8-8000-bc4132b2145f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003378524s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-7f45c957f4-dnlpn" [7703a460-cc7b-46ba-a52c-bc9703943145] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003830232s
helpers_test.go:176: Cleaning up "skaffold-245004" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-245004
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-245004: (3.279923522s)
--- PASS: TestSkaffold (137.68s)
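
Note: the skaffold step builds and deploys against the profile's Docker daemon and kube-context. The invocation from the log, with a placeholder profile name:

	skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo \
	  --status-check=true --port-forward=false --interactive=false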

TestInsufficientStorage (13.72s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-412221 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-412221 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (11.331570079s)
-- stdout --
	{"specversion":"1.0","id":"7b27232d-ca0e-45d6-b18f-8817e2bfeb67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-412221] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5584a2d6-5d9e-4788-8f82-0afaa8beeb50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22414"}}
	{"specversion":"1.0","id":"fc0503a5-e419-4920-b21f-ea33678f9d51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c212c186-0c6d-4c01-9c39-6fbccb132a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig"}}
	{"specversion":"1.0","id":"6eb5f354-36b3-47ef-8c4f-9f8d31a1d270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube"}}
	{"specversion":"1.0","id":"19c78379-d87c-4f92-8e7d-a09468bbe209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2593e511-dc68-4fac-b642-acaf18fc4b80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"68ede0e9-f108-4a51-b591-b8dfa6543c03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"340a785a-9f55-4b7d-820b-54c6dfae781d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9dcde5a1-a946-44d9-8380-49d5e8e4e951","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b5bcf33e-ab59-4db7-94ce-e0659d343a65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e7c6d152-684d-4a41-adf6-2ca32ee0956f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-412221\" primary control-plane node in \"insufficient-storage-412221\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9f5efbb-55e2-4b72-b82b-42b82ef85185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f1f71b8-be4a-408c-ac9d-024393672cab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9599ad77-af41-46a8-8f2c-8e306d543ad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-412221 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-412221 --output=json --layout=cluster: exit status 7 (298.770323ms)
-- stdout --
	{"Name":"insufficient-storage-412221","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-412221","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0110 02:30:11.814464 2429774 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-412221" does not appear in /home/jenkins/minikube-integration/22414-2221005/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-412221 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-412221 --output=json --layout=cluster: exit status 7 (314.522277ms)
-- stdout --
	{"Name":"insufficient-storage-412221","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-412221","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0110 02:30:12.127926 2429842 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-412221" does not appear in /home/jenkins/minikube-integration/22414-2221005/kubeconfig
	E0110 02:30:12.138773 2429842 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/insufficient-storage-412221/events.json: no such file or directory
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-412221" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-412221
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-412221: (1.768435492s)
--- PASS: TestInsufficientStorage (13.72s)
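
Note: this test simulates a full disk via MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (visible in the JSON events above), so `start` fails with exit code 26 (RSRC_DOCKER_STORAGE). To inspect a profile's storage state by hand:

	minikube status -p demo --output=json --layout=cluster
	# a storage-starved node reports "StatusCode":507,"StatusName":"InsufficientStorage"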

TestRunningBinaryUpgrade (331.1s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3077750188 start -p running-upgrade-312403 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3077750188 start -p running-upgrade-312403 --memory=3072 --vm-driver=docker  --container-runtime=docker: (31.359497926s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-312403 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 02:47:55.258006 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:49:45.884273 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-312403 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m56.39460712s)
helpers_test.go:176: Cleaning up "running-upgrade-312403" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-312403
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-312403: (2.128540246s)
--- PASS: TestRunningBinaryUpgrade (331.10s)
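
Note: the running-binary upgrade creates a cluster with the previous release, then re-runs `start` on the same profile with the binary under test. A sketch, where <old-minikube> stands for the downloaded v1.35.0 binary and the profile name is a placeholder:

	<old-minikube> start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=docker
	out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=docker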

TestKubernetesUpgrade (344.04s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 02:46:09.095748 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.749868824s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-312691 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-312691 --alsologtostderr: (2.148162766s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-312691 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-312691 status --format={{.Host}}: exit status 7 (64.248848ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.000755815s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-312691 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (120.114274ms)

-- stdout --
	* [kubernetes-upgrade-312691] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-312691
	    minikube start -p kubernetes-upgrade-312691 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3126912 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-312691 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 02:50:52.146856 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:51:08.929952 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:51:09.096423 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-312691 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.237397856s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-312691" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-312691
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-312691: (2.597054675s)
--- PASS: TestKubernetesUpgrade (344.04s)

TestMissingContainerUpgrade (87.11s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2439781315 start -p missing-upgrade-078813 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2439781315 start -p missing-upgrade-078813 --memory=3072 --driver=docker  --container-runtime=docker: (34.524976999s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-078813
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-078813: (1.775546186s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-078813
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-078813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 02:44:45.884536 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-078813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.480777594s)
helpers_test.go:176: Cleaning up "missing-upgrade-078813" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-078813
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-078813: (2.112979548s)
--- PASS: TestMissingContainerUpgrade (87.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-972590 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-972590 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (127.971748ms)

-- stdout --
	* [NoKubernetes-972590] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2221005/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2221005/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

TestNoKubernetes/serial/StartWithK8s (37.25s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-972590 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-972590 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.777656573s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-972590 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.25s)

TestNoKubernetes/serial/StartWithStopK8s (13.81s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-972590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-972590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (11.533625893s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-972590 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-972590 status -o json: exit status 2 (349.756383ms)

-- stdout --
	{"Name":"NoKubernetes-972590","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-972590
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-972590: (1.923484916s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.81s)

TestNoKubernetes/serial/Start (6.61s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-972590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0110 02:31:09.096104 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-972590 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (6.614325002s)
--- PASS: TestNoKubernetes/serial/Start (6.61s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22414-2221005/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-972590 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-972590 "sudo systemctl is-active --quiet service kubelet": exit status 1 (310.002974ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-972590
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-972590: (1.321412514s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (8.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-972590 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-972590 --driver=docker  --container-runtime=docker: (8.467127621s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-972590 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-972590 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.995548ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

TestStoppedBinaryUpgrade/Upgrade (342.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2934561335 start -p stopped-upgrade-544443 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2934561335 start -p stopped-upgrade-544443 --memory=3072 --vm-driver=docker  --container-runtime=docker: (1m0.971390319s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2934561335 -p stopped-upgrade-544443 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2934561335 -p stopped-upgrade-544443 stop: (10.887230657s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-544443 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0110 02:42:55.258522 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-544443 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.245895603s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (342.11s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-544443
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestPreload/Start-NoPreload-PullImage (85.45s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-657790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-657790 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m18.317449549s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-657790 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-657790
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-657790: (6.217282204s)
--- PASS: TestPreload/Start-NoPreload-PullImage (85.45s)

TestPause/serial/Start (78.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-826486 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-826486 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m18.869369087s)
--- PASS: TestPause/serial/Start (78.87s)

TestPreload/Restart-With-Preload-Check-User-Image (56.85s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-657790 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0110 02:52:55.258800 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-657790 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (56.584723142s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-657790 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (56.85s)

TestNetworkPlugins/group/auto/Start (76.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m16.799983198s)
--- PASS: TestNetworkPlugins/group/auto/Start (76.80s)

TestPause/serial/SecondStartNoReconfiguration (45.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-826486 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-826486 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.109932896s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.13s)

TestPause/serial/Pause (0.69s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-826486 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-826486 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-826486 --output=json --layout=cluster: exit status 2 (369.690465ms)

-- stdout --
	{"Name":"pause-826486","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-826486","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

TestPause/serial/Unpause (0.6s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-826486 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-826486 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (2.37s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-826486 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-826486 --alsologtostderr -v=5: (2.365595563s)
--- PASS: TestPause/serial/DeletePaused (2.37s)

TestPause/serial/VerifyDeletedResources (0.44s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-826486
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-826486: exit status 1 (18.856002ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-826486: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.44s)

TestNetworkPlugins/group/kindnet/Start (52.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0110 02:54:45.885308 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (52.137773367s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.14s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-818554 "pgrep -a kubelet"
I0110 02:55:00.813123 2222877 config.go:182] Loaded profile config "auto-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-64mzw" [6007874e-1767-4b77-ab06-adff32ce513f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-64mzw" [6007874e-1767-4b77-ab06-adff32ce513f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003028389s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)

TestNetworkPlugins/group/auto/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.45s)

TestNetworkPlugins/group/auto/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.34s)

TestNetworkPlugins/group/auto/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.28s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-dwmdj" [c8e45e3a-c832-440e-884e-2323f109dfe9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004339195s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (74.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m14.006807515s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.6s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-818554 "pgrep -a kubelet"
I0110 02:55:35.847427 2222877 config.go:182] Loaded profile config "kindnet-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.60s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-fcbb7" [ec3d5c6f-2e99-4679-9c8f-60dccde47f88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-fcbb7" [ec3d5c6f-2e99-4679-9c8f-60dccde47f88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005132982s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.44s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (51.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (51.114203446s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-cckxx" [2d8aa7b0-c954-4711-9b5e-0bc2acd35ee1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-cckxx" [2d8aa7b0-c954-4711-9b5e-0bc2acd35ee1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00947648s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-818554 "pgrep -a kubelet"
I0110 02:56:55.301621 2222877 config.go:182] Loaded profile config "calico-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-818554 replace --force -f testdata/netcat-deployment.yaml
I0110 02:56:55.704446 2222877 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-27pck" [7e7c0614-9810-4d3d-a9f3-d25f7544c7eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-27pck" [7e7c0614-9810-4d3d-a9f3-d25f7544c7eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.002873306s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-818554 "pgrep -a kubelet"
I0110 02:57:05.582873 2222877 config.go:182] Loaded profile config "custom-flannel-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-8qvfb" [dbce83a3-29b9-4e71-9c8a-e45933a4b480] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-8qvfb" [dbce83a3-29b9-4e71-9c8a-e45933a4b480] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006989261s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/false/Start (74.25s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0110 02:57:38.309913 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m14.254334241s)
--- PASS: TestNetworkPlugins/group/false/Start (74.25s)

TestNetworkPlugins/group/enable-default-cni/Start (73.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0110 02:57:55.258051 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m13.291781394s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.29s)

TestNetworkPlugins/group/false/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-818554 "pgrep -a kubelet"
I0110 02:58:48.246249 2222877 config.go:182] Loaded profile config "false-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

TestNetworkPlugins/group/false/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7wgfd" [1e0ff8ab-1925-48dc-87b6-e210fea5437d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7wgfd" [1e0ff8ab-1925-48dc-87b6-e210fea5437d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00314731s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.31s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-818554 "pgrep -a kubelet"
I0110 02:58:57.538881 2222877 config.go:182] Loaded profile config "enable-default-cni-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5lgn7" [fd2d3893-b240-431e-8a67-59f6da02f2d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-5lgn7" [fd2d3893-b240-431e-8a67-59f6da02f2d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007152104s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.33s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/flannel/Start (59.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (59.819840903s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.82s)

TestNetworkPlugins/group/bridge/Start (80.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0110 02:59:45.885026 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.155651 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.161673 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.172946 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.194556 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.239419 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.324791 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.485867 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:01.816847 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:02.459654 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:03.740475 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:06.301004 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:11.421916 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:21.662126 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m20.596352586s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.60s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-jpgj5" [11b4c402-e861-435c-9f25-69f4fbb3ea1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003880423s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-818554 "pgrep -a kubelet"
I0110 03:00:28.756976 2222877 config.go:182] Loaded profile config "flannel-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
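
The KubeletFlags check needs only a shell on the node: pgrep -a prints the kubelet PID together with its full command line, and the flag assertions are made against that output. It can be re-run by hand with the same command the test uses (profile name taken from this run):

    # Print the running kubelet's PID and full argument list inside the node
    out/minikube-linux-arm64 ssh -p flannel-818554 "pgrep -a kubelet"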

TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zsrxq" [26069f9f-f3ba-490a-a3d6-2d00ef09df4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 03:00:29.239123 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.244443 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.254719 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.275011 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.315335 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.395638 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.556075 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:29.876671 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:30.517173 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:00:31.799488 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-zsrxq" [26069f9f-f3ba-490a-a3d6-2d00ef09df4d] Running
E0110 03:00:34.359918 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004877943s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)
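
The DNS check resolves kubernetes.default from inside the netcat pod, i.e. the kubernetes Service in the default namespace, so a pass means the pod can reach the cluster DNS over the flannel network. The same probe can be issued directly (context name from this run):

    # Resolve an in-cluster Service name through the cluster DNS
    kubectl --context flannel-818554 exec deployment/netcat -- nslookup kubernetes.default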

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0110 03:00:39.480551 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)
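
The hairpin check has the netcat pod dial its own Service name, which only succeeds if traffic can loop back through the Service VIP to the pod that originated it. In the nc invocation, -z makes a connect-only probe with no payload, -w 5 caps the connect wait at five seconds, and -i 5 sets the delay between probes:

    # Connect-only probe from the pod back to its own Service (hairpin traffic)
    kubectl --context flannel-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"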

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-818554 "pgrep -a kubelet"
I0110 03:00:54.222481 2222877 config.go:182] Loaded profile config "bridge-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-818554 replace --force -f testdata/netcat-deployment.yaml
I0110 03:00:54.580541 2222877 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-d6tt4" [1947977a-978e-4111-b64c-4ec11da8a03e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-d6tt4" [1947977a-978e-4111-b64c-4ec11da8a03e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004022192s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.39s)

TestNetworkPlugins/group/kubenet/Start (78.62s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-818554 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m18.62313979s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (78.62s)
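
Unlike the flannel and bridge profiles above, which pick a CNI via --cni, this profile exercises kubenet, the kubelet's built-in basic network plugin, selected with --network-plugin=kubenet instead of a --cni value (command below trimmed to the relevant flags from this run):

    # Start a profile on kubenet rather than a CNI plugin
    out/minikube-linux-arm64 start -p kubenet-818554 --memory=3072 --network-plugin=kubenet --driver=docker --container-runtime=docker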

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (90.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-371820 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0110 03:01:48.845567 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:48.850847 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:48.861154 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:48.881424 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:48.921726 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:49.002147 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:49.162720 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:49.483522 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:50.124094 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:51.163137 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:51.404981 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:53.965531 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:01:59.085736 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:05.857249 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:05.862803 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:05.873118 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:05.893725 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:05.934070 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:06.014387 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:06.174857 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:06.495982 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:07.137220 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:08.418130 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:09.326831 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:10.978385 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:02:16.098748 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-371820 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m30.755017898s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (90.76s)
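
The old-k8s-version group pins the cluster to an older release with --kubernetes-version=v1.28.0, while the other groups in this run use v1.35.0, so this start exercises the legacy-version path (command trimmed to the distinguishing flags from this run):

    # Pin the profile to a specific, older Kubernetes release
    out/minikube-linux-arm64 start -p old-k8s-version-371820 --driver=docker --container-runtime=docker --kubernetes-version=v1.28.0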

TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-818554 "pgrep -a kubelet"
I0110 03:02:22.149366 2222877 config.go:182] Loaded profile config "kubenet-818554": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-818554 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-shnkw" [4e9806e8-16c3-4fbf-8525-0ec14b70c9c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 03:02:26.338995 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-shnkw" [4e9806e8-16c3-4fbf-8525-0ec14b70c9c2] Running
E0110 03:02:29.807312 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004082011s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.33s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-818554 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-818554 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)
E0110 03:08:22.360619 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/embed-certs/serial/FirstStart (69.1s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-797964 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 03:02:55.257765 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-797964 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m9.097865071s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.10s)
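
--embed-certs makes minikube write the client certificate and key into the kubeconfig entry as inline base64 data instead of file-path references. A sketch of one way to confirm this after the start (standard kubeconfig field names; not part of the test itself):

    # Expect client-certificate-data/client-key-data rather than file paths
    kubectl config view --raw --minify --context embed-certs-797964 | grep -E "client-(certificate|key)-data"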

TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-371820 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8b371812-4523-4a7f-8332-2680e13509e2] Pending
helpers_test.go:353: "busybox" [8b371812-4523-4a7f-8332-2680e13509e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8b371812-4523-4a7f-8332-2680e13509e2] Running
E0110 03:03:10.767901 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004064478s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-371820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)
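
DeployApp waits for the busybox pod to turn Ready and then reads the container's open-file-descriptor limit; the exec half can be reproduced directly against the same context:

    # Read the open-file limit inside the running busybox pod
    kubectl --context old-k8s-version-371820 exec busybox -- /bin/sh -c "ulimit -n"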

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-371820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-371820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.294981442s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-371820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/old-k8s-version/serial/Stop (11.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-371820 --alsologtostderr -v=3
E0110 03:03:13.084235 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-371820 --alsologtostderr -v=3: (11.736164034s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-371820 -n old-k8s-version-371820
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-371820 -n old-k8s-version-371820: exit status 7 (73.071581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-371820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
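
status --format={{.Host}} renders a single status field through a Go template, and the exit code encodes cluster state, which is why the "Non-zero exit ... (may be ok)" above is not a failure: exit status 7 with "Stopped" on stdout is the expected shape for a stopped cluster, and the test goes on to enable the dashboard addon against it:

    # Query one status field; a stopped host exits non-zero by design
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-371820 -n old-k8s-version-371820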

TestStartStop/group/old-k8s-version/serial/SecondStart (60.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-371820 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0110 03:03:27.780717 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.531520 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.536837 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.547231 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.567567 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.607894 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.688508 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:48.849127 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:49.169694 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:49.810278 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:51.090925 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:53.651961 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:57.792359 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:57.797575 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:57.807789 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:57.828128 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:57.868489 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:57.948921 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:58.109337 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:58.429872 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:58.772981 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:03:59.070596 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:04:00.351335 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:04:02.911581 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-371820 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (59.685101868s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-371820 -n old-k8s-version-371820
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.17s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-797964 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2c771976-7eb2-4a04-9d0d-fd702563efbe] Pending
helpers_test.go:353: "busybox" [2c771976-7eb2-4a04-9d0d-fd702563efbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2c771976-7eb2-4a04-9d0d-fd702563efbe] Running
E0110 03:04:08.032721 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:04:09.013428 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003466948s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-797964 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-797964 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-797964 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/Stop (11.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-797964 --alsologtostderr -v=3
E0110 03:04:18.273281 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-797964 --alsologtostderr -v=3: (11.323608s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.32s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-79pq8" [d375dd0e-2992-4e62-9408-5043a2a7a6ab] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004003131s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-797964 -n embed-certs-797964
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-797964 -n embed-certs-797964: exit status 7 (73.509942ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-797964 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (30.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-797964 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 03:04:29.494569 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-797964 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (29.721629513s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-797964 -n embed-certs-797964
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (30.38s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-79pq8" [d375dd0e-2992-4e62-9408-5043a2a7a6ab] Running
E0110 03:04:32.689146 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004244275s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-371820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-371820 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)
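
VerifyKubernetesImages dumps the profile's image inventory as JSON and reports anything outside the expected Kubernetes image set; here the busybox test image is the only non-minikube image found. The listing can be re-run as-is:

    # Dump the profile's image inventory as JSON
    out/minikube-linux-arm64 -p old-k8s-version-371820 image list --format=json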

TestStartStop/group/old-k8s-version/serial/Pause (4.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-371820 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-371820 --alsologtostderr -v=1: (1.135764975s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-371820 -n old-k8s-version-371820
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-371820 -n old-k8s-version-371820: exit status 2 (483.625761ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-371820 -n old-k8s-version-371820
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-371820 -n old-k8s-version-371820: exit status 2 (446.133466ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-371820 --alsologtostderr -v=1
E0110 03:04:38.754374 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-371820 -n old-k8s-version-371820
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-371820 -n old-k8s-version-371820
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.43s)
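
The Pause sequence is readable in the statuses above: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2 that the test tolerates ("may be ok"); after unpause, the same status queries return cleanly. A minimal version of the round trip (profile name from this run):

    # Pause the cluster, inspect the paused state, then resume it
    out/minikube-linux-arm64 pause -p old-k8s-version-371820
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-371820
    out/minikube-linux-arm64 unpause -p old-k8s-version-371820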

TestStartStop/group/no-preload/serial/FirstStart (78.62s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-288111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 03:04:45.884601 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:04:49.701951 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-288111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m18.618569607s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.62s)
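
--preload=false turns off the preloaded image tarball minikube otherwise downloads to speed up provisioning, so component images are pulled individually, which generally makes a first start slower (command trimmed to the distinguishing flags from this run):

    # Start without the preloaded image tarball
    out/minikube-linux-arm64 start -p no-preload-288111 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0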

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qpbk2" [31665859-abcc-4bce-879f-cd3c1ff4aefe] Running
E0110 03:05:01.154284 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003810986s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qpbk2" [31665859-abcc-4bce-879f-cd3c1ff4aefe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005421744s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-797964 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-797964 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (4.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-797964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-797964 -n embed-certs-797964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-797964 -n embed-certs-797964: exit status 2 (385.323518ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-797964 -n embed-certs-797964
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-797964 -n embed-certs-797964: exit status 2 (456.573493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-797964 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-797964 --alsologtostderr -v=1: (1.034844828s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-797964 -n embed-certs-797964
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-797964 -n embed-certs-797964
E0110 03:05:10.455559 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.05s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-372904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 03:05:19.716322 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.443565 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.448730 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.458991 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.479356 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.519651 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.600022 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:22.760514 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:23.080684 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:23.720923 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:25.001137 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:27.561981 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:28.864675 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/auto-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:29.239205 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:32.682609 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:42.923376 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.546291 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.551652 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.561941 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.582248 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.622539 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.702943 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:54.863430 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:55.184016 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:55.824451 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:56.924881 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kindnet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:57.105234 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:05:59.666310 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-372904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m12.35399879s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-288111 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d71081ed-b2d4-4978-a0c7-636290277272] Pending
E0110 03:06:03.403623 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [d71081ed-b2d4-4978-a0c7-636290277272] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0110 03:06:04.787099 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [d71081ed-b2d4-4978-a0c7-636290277272] Running
E0110 03:06:09.096197 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003863652s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-288111 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.33s)
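
The DeployApp step creates the busybox pod from the repo's testdata manifest, waits for it to become healthy, then reads the open-file limit inside the container. A rough manual equivalent (kubectl wait stands in for the test's own polling helper) is:

kubectl --context no-preload-288111 create -f testdata/busybox.yaml
kubectl --context no-preload-288111 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
kubectl --context no-preload-288111 exec busybox -- /bin/sh -c "ulimit -n"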

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-288111 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-288111 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-288111 --alsologtostderr -v=3
E0110 03:06:15.027346 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-288111 --alsologtostderr -v=3: (11.330819317s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-288111 -n no-preload-288111
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-288111 -n no-preload-288111: exit status 7 (93.344312ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-288111 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
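
EnableAddonAfterStop checks that addon configuration still succeeds while the cluster is stopped: `status` exits 7 with the host reported as Stopped (logged as "may be ok"), and `addons enable` must still return 0. Manually, with the names from this log:

out/minikube-linux-arm64 status --format='{{.Host}}' -p no-preload-288111 -n no-preload-288111 || true  # exits 7 when stopped
out/minikube-linux-arm64 addons enable dashboard -p no-preload-288111 --images=MetricsScraper=registry.k8s.io/echoserver:1.4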

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.97s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-288111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-288111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (51.556433403s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-288111 -n no-preload-288111
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-372904 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [696eab28-ab26-4468-8f1b-07216e1092c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [696eab28-ab26-4468-8f1b-07216e1092c7] Running
E0110 03:06:32.376561 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/false-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:06:35.508233 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003143303s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-372904 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-372904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-372904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.208316268s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-372904 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-372904 --alsologtostderr -v=3
E0110 03:06:41.637038 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/enable-default-cni-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:06:44.364769 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:06:48.844881 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-372904 --alsologtostderr -v=3: (11.973268303s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904: exit status 7 (93.195846ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-372904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-372904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 03:07:05.857723 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/custom-flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:16.468982 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:16.530298 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/calico-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-372904 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (50.229853477s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vr7s4" [60d784ad-f768-4ec2-8171-623ff64620f1] Running
E0110 03:07:22.448930 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:22.454291 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:22.464688 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:22.485109 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:22.525477 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:22.605852 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:22.766190 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:23.087027 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:23.728080 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003797247s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-vr7s4" [60d784ad-f768-4ec2-8171-623ff64620f1] Running
E0110 03:07:25.009201 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:07:27.569770 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00578359s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-288111 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-288111 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
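
VerifyKubernetesImages dumps the images loaded in the node as JSON and flags anything outside the expected Kubernetes set; here the busybox test image is the only extra. To eyeball the same list (the jq filter and the repoTags field name are assumptions about the JSON schema, which can vary between minikube versions):

out/minikube-linux-arm64 -p no-preload-288111 image list --format=json | jq -r '.[].repoTags[]?'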

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-288111 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-288111 -n no-preload-288111
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-288111 -n no-preload-288111: exit status 2 (357.68736ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-288111 -n no-preload-288111
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-288111 -n no-preload-288111: exit status 2 (385.707854ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-288111 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-288111 -n no-preload-288111
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-288111 -n no-preload-288111
E0110 03:07:32.147385 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/addons-991766/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.43s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-875447 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-875447 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (37.4257636s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-8n4dc" [6a7c73f4-2de0-4741-be8a-a85ab12316df] Running
E0110 03:07:42.931225 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003260764s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-8n4dc" [6a7c73f4-2de0-4741-be8a-a85ab12316df] Running
E0110 03:07:48.930578 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/skaffold-245004/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004835242s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-372904 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-372904 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-372904 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904: exit status 2 (419.016695ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904: exit status 2 (424.571748ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-372904 --alsologtostderr -v=1
E0110 03:07:55.257997 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/functional-394803/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-372904 -n default-k8s-diff-port-372904
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.81s)

                                                
                                    
TestPreload/PreloadSrc/gcs (5.98s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-750945 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E0110 03:08:01.881175 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:01.886459 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:01.896721 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:01.916948 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:01.957210 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:02.037482 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:02.197899 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:02.518085 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:03.158423 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:03.411772 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:04.438565 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:06.285657 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/flannel-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-750945 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (5.757545652s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-750945" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-750945
--- PASS: TestPreload/PreloadSrc/gcs (5.98s)
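
The PreloadSrc subtests only exercise the download path: a --download-only start with --preload-source pointing at gcs or github, then profile cleanup. The gcs-cached variant further down completes in about half a second because the tarball is already cached on disk. A manual run along the same lines (profile name arbitrary, flags taken from this log):

out/minikube-linux-arm64 start -p preload-dl-check --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker --container-runtime=docker
out/minikube-linux-arm64 delete -p preload-dl-check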

                                                
                                    
TestPreload/PreloadSrc/github (9.76s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-958449 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E0110 03:08:06.999655 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 03:08:12.120296 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-958449 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (9.481975523s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-958449" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-958449
--- PASS: TestPreload/PreloadSrc/github (9.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-875447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-875447 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.774349486s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-875447 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-875447 --alsologtostderr -v=3: (11.373942091s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.37s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.5s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-503817 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-503817" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-503817
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-875447 -n newest-cni-875447
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-875447 -n newest-cni-875447: exit status 7 (74.046331ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-875447 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-875447 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E0110 03:08:38.389968 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/bridge-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-875447 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (15.744081608s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-875447 -n newest-cni-875447
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-875447 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-875447 --alsologtostderr -v=1
E0110 03:08:42.846154 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/old-k8s-version-371820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-875447 -n newest-cni-875447
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-875447 -n newest-cni-875447: exit status 2 (346.470871ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-875447 -n newest-cni-875447
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-875447 -n newest-cni-875447: exit status 2 (322.780107ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-875447 --alsologtostderr -v=1
E0110 03:08:44.372388 2222877 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/kubenet-818554/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-875447 -n newest-cni-875447
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-875447 -n newest-cni-875447
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)

                                                
                                    

Test skip (26/352)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-469095 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-469095" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-469095
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-818554 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-818554

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-818554

>>> host: /etc/nsswitch.conf:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/hosts:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/resolv.conf:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-818554

>>> host: crictl pods:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: crictl containers:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> k8s: describe netcat deployment:
error: context "cilium-818554" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-818554" does not exist

>>> k8s: netcat logs:
error: context "cilium-818554" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-818554" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-818554" does not exist

>>> k8s: coredns logs:
error: context "cilium-818554" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-818554" does not exist

>>> k8s: api server logs:
error: context "cilium-818554" does not exist

>>> host: /etc/cni:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: ip a s:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"
>>> host: ip r s:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: iptables-save:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: iptables table nat:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-818554

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-818554

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-818554" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-818554" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-818554

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-818554

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-818554" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-818554" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-818554" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-818554" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-818554" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: kubelet daemon config:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> k8s: kubelet logs:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22414-2221005/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:30:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-420658
contexts:
- context:
    cluster: offline-docker-420658
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 02:30:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-docker-420658
  name: offline-docker-420658
current-context: offline-docker-420658
kind: Config
preferences: {}
users:
- name: offline-docker-420658
  user:
    client-certificate: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/offline-docker-420658/client.crt
    client-key: /home/jenkins/minikube-integration/22414-2221005/.minikube/profiles/offline-docker-420658/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-818554

>>> host: docker daemon status:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: docker daemon config:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: docker system info:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: cri-docker daemon status:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: cri-docker daemon config:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: cri-dockerd version:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: containerd daemon status:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: containerd daemon config:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: containerd config dump:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: crio daemon status:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: crio daemon config:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: /etc/crio:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"

>>> host: crio config:
* Profile "cilium-818554" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-818554"
----------------------- debugLogs end: cilium-818554 [took: 4.063009907s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-818554" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-818554
--- SKIP: TestNetworkPlugins/group/cilium (4.28s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-358151" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-358151
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)