Test Report: Docker_Linux_docker_arm64 22352

9a7985111956b2877773a073c576921d0f069a2d:2025-12-28:43023

Test failures (2/352)

Order  Failed test           Duration (s)
52     TestForceSystemdFlag  507.14
53     TestForceSystemdEnv   507.30
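Both failures are in the force-systemd integration tests. A minimal sketch for re-running just these two tests locally, assuming a minikube source checkout with the integration binary already built at out/minikube-linux-arm64 (the same path the test invokes below); go test's -run regex selects the tests and the suite's --minikube-start-args flag carries the driver arguments:

	# Hypothetical local repro, run from the minikube repo root; assumes
	# out/minikube-linux-arm64 has already been built.
	go test -v -timeout 60m ./test/integration \
		-run 'TestForceSystemd(Flag|Env)' \
		--minikube-start-args="--driver=docker --container-runtime=docker"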
TestForceSystemdFlag (507.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1228 07:06:45.222391    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:08:30.709583    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.082531    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.087956    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.098317    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.118679    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.158992    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.239525    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.400033    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:15.720678    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:16.361538    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:17.641827    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:20.202067    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:25.322413    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:35.563204    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:09:56.043490    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:10:27.660473    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:10:37.004361    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:11:45.223721    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:11:58.924666    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:14:15.082663    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m22.801297488s)

-- stdout --
	* [force-systemd-flag-649810] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-649810" primary control-plane node in "force-systemd-flag-649810" cluster
	* Pulling base image v0.0.48-1766884053-22351 ...
	
	

-- /stdout --
** stderr ** 
	I1228 07:05:59.908381  226337 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:05:59.908505  226337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:59.908511  226337 out.go:374] Setting ErrFile to fd 2...
	I1228 07:05:59.908515  226337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:59.908870  226337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:05:59.909316  226337 out.go:368] Setting JSON to false
	I1228 07:05:59.911797  226337 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2909,"bootTime":1766902651,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 07:05:59.911876  226337 start.go:143] virtualization:  
	I1228 07:05:59.916290  226337 out.go:179] * [force-systemd-flag-649810] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:05:59.920371  226337 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:05:59.920602  226337 notify.go:221] Checking for updates...
	I1228 07:05:59.930233  226337 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:05:59.933365  226337 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 07:05:59.936750  226337 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 07:05:59.939782  226337 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:05:59.943014  226337 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:05:59.946329  226337 config.go:182] Loaded profile config "force-systemd-env-475689": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:05:59.946460  226337 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:05:59.989184  226337 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:05:59.989298  226337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:06:00.147313  226337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-28 07:06:00.132170795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:06:00.147445  226337 docker.go:319] overlay module found
	I1228 07:06:00.151222  226337 out.go:179] * Using the docker driver based on user configuration
	I1228 07:06:00.154382  226337 start.go:309] selected driver: docker
	I1228 07:06:00.154405  226337 start.go:928] validating driver "docker" against <nil>
	I1228 07:06:00.154420  226337 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:06:00.155281  226337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:06:00.370917  226337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-28 07:06:00.355782962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:06:00.371072  226337 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:06:00.371298  226337 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:06:00.374992  226337 out.go:179] * Using Docker driver with root privileges
	I1228 07:06:00.377986  226337 cni.go:84] Creating CNI manager for ""
	I1228 07:06:00.378068  226337 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:06:00.378082  226337 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 07:06:00.378160  226337 start.go:353] cluster config:
	{Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:06:00.381388  226337 out.go:179] * Starting "force-systemd-flag-649810" primary control-plane node in "force-systemd-flag-649810" cluster
	I1228 07:06:00.384286  226337 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:06:00.387356  226337 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:06:00.390388  226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:00.390453  226337 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1228 07:06:00.390466  226337 cache.go:65] Caching tarball of preloaded images
	I1228 07:06:00.390569  226337 preload.go:251] Found /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:06:00.390579  226337 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:06:00.390710  226337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json ...
	I1228 07:06:00.390748  226337 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:06:00.390743  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json: {Name:mkcc4924bc7430bc738783d3bc1ceb8a9cf9dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:00.418489  226337 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:06:00.418520  226337 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:06:00.418536  226337 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:06:00.418570  226337 start.go:360] acquireMachinesLock for force-systemd-flag-649810: {Name:mka57d38f56a82b4b8389b88f726a058fa795922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:06:00.418691  226337 start.go:364] duration metric: took 104.256µs to acquireMachinesLock for "force-systemd-flag-649810"
	I1228 07:06:00.418719  226337 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:06:00.418813  226337 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:06:00.426510  226337 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:06:00.426912  226337 start.go:159] libmachine.API.Create for "force-systemd-flag-649810" (driver="docker")
	I1228 07:06:00.426990  226337 client.go:173] LocalClient.Create starting
	I1228 07:06:00.427147  226337 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem
	I1228 07:06:00.427225  226337 main.go:144] libmachine: Decoding PEM data...
	I1228 07:06:00.427273  226337 main.go:144] libmachine: Parsing certificate...
	I1228 07:06:00.427370  226337 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem
	I1228 07:06:00.427427  226337 main.go:144] libmachine: Decoding PEM data...
	I1228 07:06:00.427455  226337 main.go:144] libmachine: Parsing certificate...
	I1228 07:06:00.428427  226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:06:00.447890  226337 cli_runner.go:211] docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:06:00.447979  226337 network_create.go:284] running [docker network inspect force-systemd-flag-649810] to gather additional debugging logs...
	I1228 07:06:00.448135  226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810
	W1228 07:06:00.469959  226337 cli_runner.go:211] docker network inspect force-systemd-flag-649810 returned with exit code 1
	I1228 07:06:00.469990  226337 network_create.go:287] error running [docker network inspect force-systemd-flag-649810]: docker network inspect force-systemd-flag-649810: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-649810 not found
	I1228 07:06:00.470003  226337 network_create.go:289] output of [docker network inspect force-systemd-flag-649810]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-649810 not found
	
	** /stderr **
	I1228 07:06:00.470126  226337 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:06:00.492500  226337 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e663f46973f0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:e5:53:aa:f4:ad} reservation:<nil>}
	I1228 07:06:00.492943  226337 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad53498571c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:ea:8c:9a:c6:5d} reservation:<nil>}
	I1228 07:06:00.493252  226337 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b73d9f306bb6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:7e:31:bd:ea:20} reservation:<nil>}
	I1228 07:06:00.493666  226337 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197fcd0}
	I1228 07:06:00.493683  226337 network_create.go:124] attempt to create docker network force-systemd-flag-649810 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1228 07:06:00.493748  226337 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-649810 force-systemd-flag-649810
	I1228 07:06:00.576344  226337 network_create.go:108] docker network force-systemd-flag-649810 192.168.76.0/24 created
	I1228 07:06:00.576376  226337 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-649810" container
	I1228 07:06:00.576446  226337 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:06:00.593986  226337 cli_runner.go:164] Run: docker volume create force-systemd-flag-649810 --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:06:00.616436  226337 oci.go:103] Successfully created a docker volume force-systemd-flag-649810
	I1228 07:06:00.616534  226337 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-649810-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --entrypoint /usr/bin/test -v force-systemd-flag-649810:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:06:01.192754  226337 oci.go:107] Successfully prepared a docker volume force-systemd-flag-649810
	I1228 07:06:01.192824  226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:01.192841  226337 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:06:01.192909  226337 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-649810:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:06:04.534678  226337 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-649810:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.341730036s)
	I1228 07:06:04.534706  226337 kic.go:203] duration metric: took 3.341862567s to extract preloaded images to volume ...
	W1228 07:06:04.534846  226337 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:06:04.534950  226337 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:06:04.616424  226337 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-649810 --name force-systemd-flag-649810 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-649810 --network force-systemd-flag-649810 --ip 192.168.76.2 --volume force-systemd-flag-649810:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:06:05.050915  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Running}}
	I1228 07:06:05.083775  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
	I1228 07:06:05.121760  226337 cli_runner.go:164] Run: docker exec force-systemd-flag-649810 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:06:05.195792  226337 oci.go:144] the created container "force-systemd-flag-649810" has a running status.
	I1228 07:06:05.195838  226337 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa...
	I1228 07:06:05.653918  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:06:05.653967  226337 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:06:05.681329  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
	I1228 07:06:05.716468  226337 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:06:05.716494  226337 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-649810 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:06:05.794934  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
	I1228 07:06:05.820643  226337 machine.go:94] provisionDockerMachine start ...
	I1228 07:06:05.820722  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:05.844435  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:05.845616  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:05.845648  226337 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:06:05.846196  226337 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38420->127.0.0.1:32999: read: connection reset by peer
	I1228 07:06:08.999828  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-649810
	
	I1228 07:06:08.999858  226337 ubuntu.go:182] provisioning hostname "force-systemd-flag-649810"
	I1228 07:06:08.999919  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:09.037969  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:09.038372  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:09.038392  226337 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-649810 && echo "force-systemd-flag-649810" | sudo tee /etc/hostname
	I1228 07:06:09.203095  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-649810
	
	I1228 07:06:09.203197  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:09.226561  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:09.226886  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:09.226912  226337 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-649810' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-649810/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-649810' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:06:09.376692  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:06:09.376727  226337 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2382/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2382/.minikube}
	I1228 07:06:09.376753  226337 ubuntu.go:190] setting up certificates
	I1228 07:06:09.376763  226337 provision.go:84] configureAuth start
	I1228 07:06:09.376841  226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
	I1228 07:06:09.402227  226337 provision.go:143] copyHostCerts
	I1228 07:06:09.402278  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:09.402318  226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem, removing ...
	I1228 07:06:09.402325  226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:09.402409  226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem (1082 bytes)
	I1228 07:06:09.402515  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:09.402540  226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem, removing ...
	I1228 07:06:09.402545  226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:09.402581  226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem (1123 bytes)
	I1228 07:06:09.402643  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:09.402664  226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem, removing ...
	I1228 07:06:09.402677  226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:09.402711  226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem (1675 bytes)
	I1228 07:06:09.402788  226337 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-649810 san=[127.0.0.1 192.168.76.2 force-systemd-flag-649810 localhost minikube]
	I1228 07:06:09.752728  226337 provision.go:177] copyRemoteCerts
	I1228 07:06:09.752940  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:06:09.753068  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:09.785834  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:09.921106  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:06:09.921170  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:06:09.942067  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:06:09.942130  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1228 07:06:09.962397  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:06:09.962470  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:06:09.983245  226337 provision.go:87] duration metric: took 606.461413ms to configureAuth
	I1228 07:06:09.983284  226337 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:06:09.983486  226337 config.go:182] Loaded profile config "force-systemd-flag-649810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:06:09.983556  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:10.018234  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:10.018571  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:10.018580  226337 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:06:10.171532  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:06:10.171556  226337 ubuntu.go:71] root file system type: overlay
	I1228 07:06:10.171677  226337 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:06:10.171764  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:10.196947  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:10.197266  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:10.197352  226337 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:06:10.359758  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:06:10.359847  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:10.387300  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:10.387769  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:10.387790  226337 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1228 07:06:11.609721  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:06:10.353229981 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1228 07:06:11.609760  226337 machine.go:97] duration metric: took 5.78909378s to provisionDockerMachine
	I1228 07:06:11.609782  226337 client.go:176] duration metric: took 11.182753053s to LocalClient.Create
	I1228 07:06:11.609802  226337 start.go:167] duration metric: took 11.182887652s to libmachine.API.Create "force-systemd-flag-649810"
	I1228 07:06:11.609811  226337 start.go:293] postStartSetup for "force-systemd-flag-649810" (driver="docker")
	I1228 07:06:11.609821  226337 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:06:11.609893  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:06:11.609934  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:11.637109  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:11.737067  226337 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:06:11.740612  226337 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:06:11.740643  226337 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:06:11.740655  226337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/addons for local assets ...
	I1228 07:06:11.740714  226337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/files for local assets ...
	I1228 07:06:11.740797  226337 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> 42022.pem in /etc/ssl/certs
	I1228 07:06:11.740808  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /etc/ssl/certs/42022.pem
	I1228 07:06:11.740908  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:06:11.750293  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:11.773152  226337 start.go:296] duration metric: took 163.328024ms for postStartSetup
	I1228 07:06:11.773520  226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
	I1228 07:06:11.810119  226337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json ...
	I1228 07:06:11.810475  226337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:06:11.810541  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:11.832046  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:11.938437  226337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:06:11.944396  226337 start.go:128] duration metric: took 11.525566626s to createHost
	I1228 07:06:11.944421  226337 start.go:83] releasing machines lock for "force-systemd-flag-649810", held for 11.52572031s
	I1228 07:06:11.944491  226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
	I1228 07:06:11.969414  226337 ssh_runner.go:195] Run: cat /version.json
	I1228 07:06:11.969478  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:11.969777  226337 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:06:11.969830  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:12.000799  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:12.017933  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:12.132244  226337 ssh_runner.go:195] Run: systemctl --version
	I1228 07:06:12.231712  226337 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:06:12.236758  226337 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:06:12.236869  226337 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:06:12.267687  226337 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:06:12.267755  226337 start.go:496] detecting cgroup driver to use...
	I1228 07:06:12.267782  226337 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:12.267953  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:12.283188  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:06:12.293095  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:06:12.304428  226337 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:06:12.304533  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:06:12.313854  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:12.323205  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:06:12.332643  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:12.341934  226337 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:06:12.350791  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:06:12.360609  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:06:12.369833  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:06:12.379802  226337 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:06:12.388095  226337 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:06:12.396058  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:12.536042  226337 ssh_runner.go:195] Run: sudo systemctl restart containerd
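For reference, the sed edits above should leave /etc/containerd/config.toml agreeing with the enforced "systemd" cgroup driver. A quick way to confirm (a sketch; this check is not part of the log, and the grep patterns assume the stock kicbase config layout):

	$ grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sandbox_image = "registry.k8s.io/pause:3.10.1"
	SystemdCgroup = true
	conf_dir = "/etc/cni/net.d"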
	I1228 07:06:12.674161  226337 start.go:496] detecting cgroup driver to use...
	I1228 07:06:12.674237  226337 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:12.674325  226337 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:06:12.699050  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:12.712858  226337 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:06:12.751092  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:12.769844  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:06:12.792531  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:12.815446  226337 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:06:12.819518  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:06:12.829311  226337 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:06:12.845032  226337 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:06:12.989013  226337 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:06:13.140533  226337 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1228 07:06:13.140637  226337 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1228 07:06:13.157163  226337 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:06:13.171809  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:13.314373  226337 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:06:13.798806  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:06:13.813917  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:06:13.829978  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
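The 129-byte /etc/docker/daemon.json written above is not echoed in the log. A representative file that selects the systemd cgroup driver would look like the sketch below; the exact contents here are an assumption, but "exec-opts": ["native.cgroupdriver=systemd"] is the documented dockerd option this step relies on:

	$ sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=systemd"],
	  "storage-driver": "overlay2"
	}
	EOF
	$ sudo systemctl daemon-reload && sudo systemctl restart docker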
	I1228 07:06:13.845472  226337 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:06:13.990757  226337 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:06:14.139338  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:14.291076  226337 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:06:14.316287  226337 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:06:14.331768  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:14.475844  226337 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:06:14.562973  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:14.582946  226337 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:06:14.583063  226337 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:06:14.587245  226337 start.go:574] Will wait 60s for crictl version
	I1228 07:06:14.587308  226337 ssh_runner.go:195] Run: which crictl
	I1228 07:06:14.591035  226337 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:06:14.619000  226337 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1228 07:06:14.619117  226337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:14.654802  226337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:14.681742  226337 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:06:14.681898  226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:06:14.699922  226337 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:06:14.704031  226337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
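The bash one-liner above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal line, appends the fresh one, and copies the temp file back with sudo. Afterwards the guest should contain an entry like this (sketch, matching the subnet of this run):

	$ grep host.minikube.internal /etc/hosts
	192.168.76.1	host.minikube.internal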
	I1228 07:06:14.718757  226337 kubeadm.go:884] updating cluster {Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:06:14.718869  226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:14.718923  226337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:14.738067  226337 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:14.738094  226337 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:06:14.738159  226337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:14.765792  226337 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:14.765815  226337 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:06:14.765825  226337 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1228 07:06:14.765924  226337 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-649810 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
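The empty ExecStart= line in the drop-in above is the standard systemd override idiom: a service may carry only one ExecStart, so the drop-in first clears the inherited value and then sets its own. The merged result can be inspected with systemctl cat (a sketch; output elided):

	$ systemctl cat kubelet
	# /lib/systemd/system/kubelet.service
	...
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet ...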
	I1228 07:06:14.766001  226337 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:06:14.828478  226337 cni.go:84] Creating CNI manager for ""
	I1228 07:06:14.828557  226337 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:06:14.828591  226337 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:06:14.828637  226337 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-649810 NodeName:force-systemd-flag-649810 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:06:14.828791  226337 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-649810"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
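At this point three layers must agree on the systemd cgroup driver: dockerd (via daemon.json), containerd (SystemdCgroup), and the kubelet (cgroupDriver: systemd in the KubeletConfiguration above). A consistency spot-check, sketched from commands that appear elsewhere in this log plus one assumed grep against the just-written config:

	$ docker info --format '{{.CgroupDriver}}'               # expect: systemd
	$ grep SystemdCgroup /etc/containerd/config.toml         # expect: SystemdCgroup = true
	$ grep cgroupDriver /var/tmp/minikube/kubeadm.yaml.new   # expect: cgroupDriver: systemd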
	I1228 07:06:14.828879  226337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:06:14.837993  226337 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:06:14.838058  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:06:14.846918  226337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1228 07:06:14.862481  226337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:06:14.877609  226337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1228 07:06:14.893026  226337 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:06:14.897112  226337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:06:14.908147  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:15.112938  226337 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:06:15.149385  226337 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810 for IP: 192.168.76.2
	I1228 07:06:15.149408  226337 certs.go:195] generating shared ca certs ...
	I1228 07:06:15.149425  226337 certs.go:227] acquiring lock for ca certs: {Name:mkb08779780dcf6b96f2c93a4ec9c28968a3dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.149572  226337 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key
	I1228 07:06:15.149628  226337 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key
	I1228 07:06:15.149636  226337 certs.go:257] generating profile certs ...
	I1228 07:06:15.149691  226337 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key
	I1228 07:06:15.149702  226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt with IP's: []
	I1228 07:06:15.327648  226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt ...
	I1228 07:06:15.327721  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt: {Name:mkf75acb8f7153fe0d0255b564acb6149af2fb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.327938  226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key ...
	I1228 07:06:15.327982  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key: {Name:mk51f561ed38ca116434114e1f62874070b9255b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.328119  226337 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1
	I1228 07:06:15.328164  226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1228 07:06:15.764980  226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 ...
	I1228 07:06:15.765013  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1: {Name:mk91fced1432c5d7a2938e5f8f1f25ea86d8f5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.765212  226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1 ...
	I1228 07:06:15.765227  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1: {Name:mk1a153a4b0a803bdf2ccf3b1ffb3b75a611c21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.765314  226337 certs.go:382] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt
	I1228 07:06:15.765393  226337 certs.go:386] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key
	I1228 07:06:15.765455  226337 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key
	I1228 07:06:15.765467  226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt with IP's: []
	I1228 07:06:16.054118  226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt ...
	I1228 07:06:16.054154  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt: {Name:mk48c2c2ab804522bc505c3ba557fdae87d36100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:16.054331  226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key ...
	I1228 07:06:16.054347  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key: {Name:mk69a54a58808c1b19f454fc1eed5065bebd15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:16.054418  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:06:16.054445  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:06:16.054466  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:06:16.054482  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:06:16.054500  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:06:16.054517  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:06:16.054529  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:06:16.054543  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:06:16.054593  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem (1338 bytes)
	W1228 07:06:16.054636  226337 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202_empty.pem, impossibly tiny 0 bytes
	I1228 07:06:16.054649  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem (1679 bytes)
	I1228 07:06:16.054677  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:06:16.054705  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:06:16.054746  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem (1675 bytes)
	I1228 07:06:16.054797  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:16.054833  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.054850  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem -> /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.054861  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.055446  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:06:16.078627  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1228 07:06:16.098321  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:06:16.116670  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:06:16.134710  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:06:16.152509  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:06:16.170879  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:06:16.188865  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:06:16.206838  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:06:16.226258  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem --> /usr/share/ca-certificates/4202.pem (1338 bytes)
	I1228 07:06:16.245780  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /usr/share/ca-certificates/42022.pem (1708 bytes)
	I1228 07:06:16.268659  226337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:06:16.283601  226337 ssh_runner.go:195] Run: openssl version
	I1228 07:06:16.290585  226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.299195  226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:06:16.307118  226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.310841  226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.310916  226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.352064  226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:06:16.359487  226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:06:16.366859  226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.374261  226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4202.pem /etc/ssl/certs/4202.pem
	I1228 07:06:16.381698  226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.385388  226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.385461  226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.426915  226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:06:16.434366  226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4202.pem /etc/ssl/certs/51391683.0
	I1228 07:06:16.441642  226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.449184  226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42022.pem /etc/ssl/certs/42022.pem
	I1228 07:06:16.456957  226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.460669  226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.460736  226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.501782  226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:06:16.509722  226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42022.pem /etc/ssl/certs/3ec20f2e.0
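The hash-named links created above follow OpenSSL's subject-hash convention: openssl x509 -hash -noout prints an 8-hex-digit hash of the certificate subject, and the system trust store looks certificates up via a <hash>.0 symlink. For minikubeCA.pem, this run effectively performed:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0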
	I1228 07:06:16.517199  226337 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:06:16.520925  226337 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:06:16.520999  226337 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:06:16.521142  226337 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:06:16.537329  226337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:06:16.545115  226337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:06:16.552764  226337 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:06:16.552877  226337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:06:16.560792  226337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:06:16.560864  226337 kubeadm.go:158] found existing configuration files:
	
	I1228 07:06:16.560941  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:06:16.568352  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:06:16.568441  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:06:16.575993  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:06:16.583437  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:06:16.583546  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:06:16.590681  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:06:16.598093  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:06:16.598202  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:06:16.605574  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:06:16.613280  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:06:16.613396  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:06:16.620636  226337 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:06:16.661468  226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:06:16.661712  226337 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:06:16.758679  226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:06:16.758779  226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:06:16.758849  226337 kubeadm.go:319] OS: Linux
	I1228 07:06:16.758929  226337 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:06:16.759009  226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:06:16.759089  226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:06:16.759163  226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:06:16.759245  226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:06:16.759325  226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:06:16.759389  226337 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:06:16.759482  226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:06:16.759553  226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:06:16.835436  226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:06:16.835601  226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:06:16.835720  226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:06:16.852604  226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:06:16.858977  226337 out.go:252]   - Generating certificates and keys ...
	I1228 07:06:16.859071  226337 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:06:16.859148  226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:06:16.922161  226337 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:06:17.011768  226337 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:06:17.090969  226337 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:06:17.253680  226337 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:06:17.439963  226337 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:06:17.440300  226337 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:06:17.731890  226337 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:06:17.732248  226337 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:06:18.395961  226337 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:06:18.651951  226337 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:06:18.929995  226337 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:06:18.930273  226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:06:19.098124  226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:06:19.475849  226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:06:19.685709  226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:06:20.030601  226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:06:20.108979  226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:06:20.109747  226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:06:20.112581  226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:06:20.115925  226337 out.go:252]   - Booting up control plane ...
	I1228 07:06:20.116029  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:06:20.116107  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:06:20.116173  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:06:20.131690  226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:06:20.131807  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:06:20.142201  226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:06:20.142570  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:06:20.142817  226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:06:20.280684  226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:06:20.280818  226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:10:20.275741  226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182866s
	I1228 07:10:20.275772  226337 kubeadm.go:319] 
	I1228 07:10:20.275828  226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:10:20.275861  226337 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:10:20.275960  226337 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:10:20.275966  226337 kubeadm.go:319] 
	I1228 07:10:20.276064  226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:10:20.276095  226337 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:10:20.276124  226337 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:10:20.276128  226337 kubeadm.go:319] 
	I1228 07:10:20.279666  226337 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:10:20.280132  226337 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:10:20.280283  226337 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:10:20.280566  226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:10:20.280582  226337 kubeadm.go:319] 
	I1228 07:10:20.280656  226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
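The failure mode here is the kubelet never answering its healthz probe on 127.0.0.1:10248 within the 4m0s window. To triage interactively, a minimal sequence inside the node container (a sketch using the container name from this run; the individual commands are the ones kubeadm suggests above):

	$ docker exec -it force-systemd-flag-649810 bash
	# systemctl status kubelet
	# journalctl -xeu kubelet --no-pager | tail -n 50
	# curl -sS http://127.0.0.1:10248/healthz    # the endpoint kubelet-check polls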
	W1228 07:10:20.280798  226337 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182866s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182866s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1228 07:10:20.280887  226337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:10:20.708651  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:10:20.722039  226337 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:10:20.722109  226337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:10:20.730359  226337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:10:20.730423  226337 kubeadm.go:158] found existing configuration files:
	
	I1228 07:10:20.730491  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:10:20.738525  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:10:20.738593  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:10:20.746327  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:10:20.754111  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:10:20.754179  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:10:20.761709  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:10:20.769442  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:10:20.769505  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:10:20.777179  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:10:20.785378  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:10:20.785469  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:10:20.793339  226337 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:10:20.906011  226337 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:10:20.906414  226337 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:10:20.974641  226337 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:14:22.104356  226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:14:22.104394  226337 kubeadm.go:319] 
	I1228 07:14:22.104466  226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:14:22.105084  226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:14:22.105135  226337 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:14:22.105225  226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:14:22.105279  226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:14:22.105313  226337 kubeadm.go:319] OS: Linux
	I1228 07:14:22.105359  226337 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:14:22.105408  226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:14:22.105455  226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:14:22.105503  226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:14:22.105551  226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:14:22.105600  226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:14:22.105645  226337 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:14:22.105693  226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:14:22.105739  226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:14:22.105812  226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:14:22.105907  226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:14:22.105996  226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:14:22.106058  226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:14:22.109591  226337 out.go:252]   - Generating certificates and keys ...
	I1228 07:14:22.109685  226337 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:14:22.109750  226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:14:22.109825  226337 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:14:22.109891  226337 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:14:22.109960  226337 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:14:22.110013  226337 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:14:22.110076  226337 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:14:22.110138  226337 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:14:22.110212  226337 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:14:22.110285  226337 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:14:22.110323  226337 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:14:22.110393  226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:14:22.110444  226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:14:22.110501  226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:14:22.110554  226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:14:22.110617  226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:14:22.110671  226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:14:22.110755  226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:14:22.110820  226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:14:22.113628  226337 out.go:252]   - Booting up control plane ...
	I1228 07:14:22.113810  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:14:22.113959  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:14:22.114042  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:14:22.114156  226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:14:22.114258  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:14:22.114370  226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:14:22.114461  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:14:22.114503  226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:14:22.114643  226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:14:22.114755  226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:14:22.114825  226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000416054s
	I1228 07:14:22.114829  226337 kubeadm.go:319] 
	I1228 07:14:22.114889  226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:14:22.114944  226337 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:14:22.115058  226337 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:14:22.115063  226337 kubeadm.go:319] 
	I1228 07:14:22.115176  226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:14:22.115211  226337 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:14:22.115243  226337 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:14:22.115303  226337 kubeadm.go:403] duration metric: took 8m5.594328388s to StartCluster
	I1228 07:14:22.115380  226337 ssh_runner.go:195] Run: sudo runc list -f json
	I1228 07:14:22.115458  226337 kubeadm.go:319] 
	E1228 07:14:22.129689  226337 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.129812  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.144714  226337 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.144779  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.157963  226337 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.158032  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.178330  226337 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.178399  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.207266  226337 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.207333  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.238212  226337 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.238280  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.254139  226337 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.254169  226337 logs.go:123] Gathering logs for kubelet ...
	I1228 07:14:22.254181  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:14:22.337714  226337 logs.go:123] Gathering logs for dmesg ...
	I1228 07:14:22.337754  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:14:22.358014  226337 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:14:22.358048  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:14:22.449954  226337 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:14:22.438828    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.440075    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.441129    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443104    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443902    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:14:22.438828    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.440075    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.441129    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443104    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443902    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:14:22.449975  226337 logs.go:123] Gathering logs for Docker ...
	I1228 07:14:22.449988  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:14:22.475409  226337 logs.go:123] Gathering logs for container status ...
	I1228 07:14:22.475444  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:14:22.540684  226337 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000416054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:14:22.540729  226337 out.go:285] * 
	W1228 07:14:22.540781  226337 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000416054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:22.540794  226337 out.go:285] * 
	W1228 07:14:22.541043  226337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:14:22.548492  226337 out.go:203] 
	W1228 07:14:22.550664  226337 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000416054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:22.550723  226337 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:14:22.550747  226337 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:14:22.553893  226337 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
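The kubelet-check timeout above can be reproduced by hand before re-running the suite. A minimal triage sketch against this run's profile (the commands are the ones the log itself suggests; the profile name force-systemd-flag-649810 is specific to this run):

	out/minikube-linux-arm64 -p force-systemd-flag-649810 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-arm64 -p force-systemd-flag-649810 ssh "sudo journalctl -xeu kubelet -n 100"
	# the same probe kubeadm's kubelet-check polls for up to 4m0s:
	out/minikube-linux-arm64 -p force-systemd-flag-649810 ssh "curl -sSL http://127.0.0.1:10248/healthz"

The suggestion minikube prints (--extra-config=kubelet.cgroup-driver=systemd) can be appended to the failing start invocation on a retry; whether it resolves this particular failure is not confirmed by this log.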
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-649810 ssh "docker info --format {{.CgroupDriver}}"
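This ssh step is the test's actual assertion: with --force-systemd, Docker inside the node should report the systemd cgroup driver. The same check can be run standalone (the expected value is inferred from the flag and test name, not shown in this log, since the start already failed):

	out/minikube-linux-arm64 -p force-systemd-flag-649810 ssh "docker info --format {{.CgroupDriver}}"
	# expected: systemd (cgroupfs would mean --force-systemd did not take effect)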
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-28 07:14:23.139509994 +0000 UTC m=+2785.116132004
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-649810
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-649810:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b",
	        "Created": "2025-12-28T07:06:04.639169024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:06:04.727903456Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/hostname",
	        "HostsPath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/hosts",
	        "LogPath": "/var/lib/docker/containers/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b/22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b-json.log",
	        "Name": "/force-systemd-flag-649810",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-649810:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-649810",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "22a55950ba9644dfe9b802912dd83735736caf4f67e2e9a507a099c39a77904b",
	                "LowerDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31-init/diff:/var/lib/docker/overlay2/ecb99d95c7e8ff1804547a73cc82a9ed1888766e4e833c4a7b53fdf298df8f33/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e82f0c4c464dee65a8dd92e066e4480cb81062e10d0f194386014328f948ca31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-649810",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-649810/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-649810",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-649810",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-649810",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c82fcd188bdccad83897d9dad70a1e8b0384eaa2e7e48e61d87f2b1735f3825e",
	            "SandboxKey": "/var/run/docker/netns/c82fcd188bdc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32999"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33000"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33003"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33001"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33002"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-649810": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:93:92:89:1d:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5690f8910d0a1179b8b5010fa140b6129798301441766f5c0c359c3fe684e086",
	                    "EndpointID": "10f2940db2d73e005211731a2d4ce5981fafdb9236e6a598dbb03954bbef38ac",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-649810",
	                        "22a55950ba96"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
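Individual fields of the inspect dump above can be pulled with a Go-template filter instead of scanning the full JSON; a sketch using fields present in this output:

	docker inspect -f '{{.State.Status}}' force-systemd-flag-649810
	docker inspect -f '{{.HostConfig.CgroupnsMode}}' force-systemd-flag-649810
	docker inspect -f '{{(index .NetworkSettings.Networks "force-systemd-flag-649810").IPAddress}}' force-systemd-flag-649810

For this container these return "running", "host", and "192.168.76.2" respectively, matching the JSON above.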
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-649810 -n force-systemd-flag-649810
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-649810 -n force-systemd-flag-649810: exit status 6 (378.341303ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:14:23.532512  239159 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-649810" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
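The exit-6 status follows from the stale kubeconfig reported in stderr: the profile endpoint never made it into /home/jenkins/minikube-integration/22352-2382/kubeconfig because the cluster never came up. The remediation the status output itself suggests, as a sketch (it refreshes the kubeconfig entry but cannot restore a control plane that never started):

	out/minikube-linux-arm64 -p force-systemd-flag-649810 update-context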
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-649810 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-436830 sudo systemctl cat docker --no-pager                                                                                                                                                                                               │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /etc/docker/daemon.json                                                                                                                                                                                                   │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo docker system info                                                                                                                                                                                                            │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo containerd config dump                                                                                                                                                                                                        │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo crio config                                                                                                                                                                                                                   │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ delete  │ -p cilium-436830                                                                                                                                                                                                                                    │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                                        │ force-systemd-env-475689  │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ delete  │ -p offline-docker-575789                                                                                                                                                                                                                            │ offline-docker-575789     │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                       │ force-systemd-flag-649810 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ force-systemd-env-475689 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                 │ force-systemd-env-475689  │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ delete  │ -p force-systemd-env-475689                                                                                                                                                                                                                         │ force-systemd-env-475689  │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	│ start   │ -p docker-flags-974112 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ docker-flags-974112       │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │                     │
	│ ssh     │ force-systemd-flag-649810 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                │ force-systemd-flag-649810 │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
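
The two final ssh rows in the Audit table are the test's verification step: each runs 'docker info --format {{.CgroupDriver}}' inside the node and expects "systemd". A minimal Go sketch of the same check, run against a local daemon rather than over 'minikube ssh':

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the ssh rows above perform inside the node.
	out, err := exec.Command("docker", "info",
		"--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	// TestForceSystemdFlag expects "systemd" here; "cgroupfs" would
	// mean --force-systemd did not take effect.
	fmt.Println("CgroupDriver:", driver)
}
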
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:14:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 07:14:21.473844  238674 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:14:21.474037  238674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:14:21.474052  238674 out.go:374] Setting ErrFile to fd 2...
	I1228 07:14:21.474058  238674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:14:21.474431  238674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:14:21.474966  238674 out.go:368] Setting JSON to false
	I1228 07:14:21.475861  238674 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3411,"bootTime":1766902651,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 07:14:21.476004  238674 start.go:143] virtualization:  
	I1228 07:14:21.479677  238674 out.go:179] * [docker-flags-974112] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:14:21.484111  238674 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:14:21.484173  238674 notify.go:221] Checking for updates...
	I1228 07:14:21.490671  238674 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:14:21.493884  238674 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 07:14:21.497954  238674 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 07:14:21.501067  238674 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:14:21.504148  238674 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:14:21.507748  238674 config.go:182] Loaded profile config "force-systemd-flag-649810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:14:21.507881  238674 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:14:21.536812  238674 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:14:21.536933  238674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:14:21.616406  238674 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:14:21.60654715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:14:21.616509  238674 docker.go:319] overlay module found
	I1228 07:14:21.619807  238674 out.go:179] * Using the docker driver based on user configuration
	I1228 07:14:21.622822  238674 start.go:309] selected driver: docker
	I1228 07:14:21.622842  238674 start.go:928] validating driver "docker" against <nil>
	I1228 07:14:21.622867  238674 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:14:21.623672  238674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:14:21.676457  238674 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:14:21.667211937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:14:21.676611  238674 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:14:21.676826  238674 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1228 07:14:21.679761  238674 out.go:179] * Using Docker driver with root privileges
	I1228 07:14:21.682680  238674 cni.go:84] Creating CNI manager for ""
	I1228 07:14:21.682756  238674 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:14:21.682769  238674 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 07:14:21.682848  238674 start.go:353] cluster config:
	{Name:docker-flags-974112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-974112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:14:21.686022  238674 out.go:179] * Starting "docker-flags-974112" primary control-plane node in "docker-flags-974112" cluster
	I1228 07:14:21.688888  238674 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:14:21.691851  238674 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:14:21.694834  238674 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:14:21.694889  238674 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1228 07:14:21.694904  238674 cache.go:65] Caching tarball of preloaded images
	I1228 07:14:21.694921  238674 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:14:21.695005  238674 preload.go:251] Found /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:14:21.695015  238674 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:14:21.695129  238674 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/docker-flags-974112/config.json ...
	I1228 07:14:21.695145  238674 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/docker-flags-974112/config.json: {Name:mk6cba84f3d902f4079b5b5328111f916ed3e3de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:14:21.713913  238674 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:14:21.713936  238674 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:14:21.713957  238674 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:14:21.713989  238674 start.go:360] acquireMachinesLock for docker-flags-974112: {Name:mkb59a147fc69d050468884d4c5766ddc83a8325 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:14:21.714109  238674 start.go:364] duration metric: took 99.464µs to acquireMachinesLock for "docker-flags-974112"
	I1228 07:14:21.714136  238674 start.go:93] Provisioning new machine with config: &{Name:docker-flags-974112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-974112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:14:21.714208  238674 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:14:22.104356  226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:14:22.104394  226337 kubeadm.go:319] 
	I1228 07:14:22.104466  226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:14:22.105084  226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:14:22.105135  226337 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:14:22.105225  226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:14:22.105279  226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:14:22.105313  226337 kubeadm.go:319] OS: Linux
	I1228 07:14:22.105359  226337 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:14:22.105408  226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:14:22.105455  226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:14:22.105503  226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:14:22.105551  226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:14:22.105600  226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:14:22.105645  226337 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:14:22.105693  226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:14:22.105739  226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:14:22.105812  226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:14:22.105907  226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:14:22.105996  226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:14:22.106058  226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:14:22.109591  226337 out.go:252]   - Generating certificates and keys ...
	I1228 07:14:22.109685  226337 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:14:22.109750  226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:14:22.109825  226337 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:14:22.109891  226337 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:14:22.109960  226337 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:14:22.110013  226337 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:14:22.110076  226337 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:14:22.110138  226337 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:14:22.110212  226337 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:14:22.110285  226337 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:14:22.110323  226337 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:14:22.110393  226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:14:22.110444  226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:14:22.110501  226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:14:22.110554  226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:14:22.110617  226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:14:22.110671  226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:14:22.110755  226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:14:22.110820  226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:14:22.113628  226337 out.go:252]   - Booting up control plane ...
	I1228 07:14:22.113810  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:14:22.113959  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:14:22.114042  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:14:22.114156  226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:14:22.114258  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:14:22.114370  226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:14:22.114461  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:14:22.114503  226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:14:22.114643  226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:14:22.114755  226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:14:22.114825  226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000416054s
	I1228 07:14:22.114829  226337 kubeadm.go:319] 
	I1228 07:14:22.114889  226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:14:22.114944  226337 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:14:22.115058  226337 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:14:22.115063  226337 kubeadm.go:319] 
	I1228 07:14:22.115176  226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:14:22.115211  226337 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:14:22.115243  226337 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:14:22.115303  226337 kubeadm.go:403] duration metric: took 8m5.594328388s to StartCluster
	I1228 07:14:22.115380  226337 ssh_runner.go:195] Run: sudo runc list -f json
	I1228 07:14:22.115458  226337 kubeadm.go:319] 
	E1228 07:14:22.129689  226337 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.129812  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.144714  226337 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.144779  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.157963  226337 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.158032  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.178330  226337 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.178399  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.207266  226337 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.207333  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.238212  226337 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.238280  226337 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:22.254139  226337 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:22Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:22.254169  226337 logs.go:123] Gathering logs for kubelet ...
	I1228 07:14:22.254181  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:14:22.337714  226337 logs.go:123] Gathering logs for dmesg ...
	I1228 07:14:22.337754  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:14:22.358014  226337 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:14:22.358048  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:14:22.449954  226337 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:14:22.438828    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.440075    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.441129    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443104    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443902    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:14:22.438828    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.440075    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.441129    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443104    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:22.443902    5500 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:14:22.449975  226337 logs.go:123] Gathering logs for Docker ...
	I1228 07:14:22.449988  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:14:22.475409  226337 logs.go:123] Gathering logs for container status ...
	I1228 07:14:22.475444  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:14:22.540684  226337 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000416054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
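
The kubelet-check phase above polls http://127.0.0.1:10248/healthz for up to 4m0s before giving up. A minimal Go sketch of such a probe, to illustrate the check kubeadm describes rather than its actual implementation:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll the kubelet healthz endpoint until it answers 200 OK or
	// the 4-minute deadline from the log above expires.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet,
			"http://127.0.0.1:10248/healthz", nil)
		resp, err := http.DefaultClient.Do(req)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet healthy")
			return
		}
		if err != nil && ctx.Err() != nil {
			fmt.Println("kubelet not healthy before deadline:", ctx.Err())
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
}
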
	W1228 07:14:22.540729  226337 out.go:285] * 
	W1228 07:14:22.540781  226337 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000416054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:22.540794  226337 out.go:285] * 
	W1228 07:14:22.541043  226337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:14:22.548492  226337 out.go:203] 
	W1228 07:14:22.550664  226337 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000416054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:22.550723  226337 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:14:22.550747  226337 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:14:22.553893  226337 out.go:203] 
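
The suggestion above points at a cgroup-driver mismatch between the container runtime and the kubelet. For illustration only (minikube has its own provisioning path), one common way to pin Docker to the systemd driver is an exec-opts entry in /etc/docker/daemon.json that matches kubelet's cgroupDriver; a Go sketch that renders such a file:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// {"exec-opts":["native.cgroupdriver=systemd"]} is the documented
	// daemon.json setting for the systemd cgroup driver.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=systemd"},
	}
	data, _ := json.MarshalIndent(cfg, "", "  ")
	// Written to a scratch path here; a real node would use
	// /etc/docker/daemon.json followed by a daemon restart.
	if err := os.WriteFile("daemon.json", data, 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Printf("%s\n", data)
}
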
	
	
	==> Docker <==
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.528721133Z" level=info msg="Restoring containers: start."
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.556618574Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.580654942Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.753866545Z" level=info msg="Loading containers: done."
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.769990297Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.770166858Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.770266330Z" level=info msg="Initializing buildkit"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.790263267Z" level=info msg="Completed buildkit initialization"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.795789002Z" level=info msg="Daemon has completed initialization"
	Dec 28 07:06:13 force-systemd-flag-649810 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.805442276Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.805588609Z" level=info msg="API listen on /run/docker.sock"
	Dec 28 07:06:13 force-systemd-flag-649810 dockerd[1146]: time="2025-12-28T07:06:13.805605306Z" level=info msg="API listen on [::]:2376"
	Dec 28 07:06:14 force-systemd-flag-649810 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Loaded network plugin cni"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Setting cgroupDriver systemd"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 28 07:06:14 force-systemd-flag-649810 cri-dockerd[1431]: time="2025-12-28T07:06:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 28 07:06:14 force-systemd-flag-649810 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:14:24.208502    5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:24.209210    5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:24.210898    5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:24.211504    5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:24.213121    5650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015148] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.500432] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034760] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.784008] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.137634] kauditd_printk_skb: 36 callbacks suppressed
	[Dec28 06:42] hrtimer: interrupt took 11242004 ns
	
	
	==> kernel <==
	 07:14:24 up 56 min,  0 user,  load average: 0.63, 0.93, 1.82
	Linux force-systemd-flag-649810 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:14:20 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:21 force-systemd-flag-649810 kubelet[5433]: E1228 07:14:21.561026    5433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:21 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:22 force-systemd-flag-649810 kubelet[5482]: E1228 07:14:22.343595    5482 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:22 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:23 force-systemd-flag-649810 kubelet[5532]: E1228 07:14:23.148283    5532 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:23 force-systemd-flag-649810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:24 force-systemd-flag-649810 kubelet[5632]: E1228 07:14:24.120612    5632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:24 force-systemd-flag-649810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:24 force-systemd-flag-649810 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
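The failure mode is consistent across the log above: kubelet v1.35 exits during config validation because the host is on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), and dockerd's deprecation warning in the ==> Docker <== section confirms the node container is indeed running cgroup v1. A minimal diagnostic sketch for the worker, using only the names shown in this run (note the suggested --extra-config flag changes the cgroup driver, not the cgroup version, so it is unlikely to clear this particular validation error):

	# cgroup2fs means a cgroup v2 host; tmpfs means cgroup v1
	stat -fc %T /sys/fs/cgroup/
	# Docker's view of the same (prints 1 or 2)
	docker info --format '{{.CgroupVersion}}'
	# The suggestion from the log, kept for completeness
	out/minikube-linux-arm64 start -p force-systemd-flag-649810 --extra-config=kubelet.cgroup-driver=systemd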
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-649810 -n force-systemd-flag-649810
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-649810 -n force-systemd-flag-649810: exit status 6 (474.282279ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1228 07:14:24.819902  239370 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-649810" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-649810" apiserver is not running, skipping kubectl commands (state="Stopped")
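For context, the property this test asserts (a sketch; the exact check lives in docker_test.go, and the profile is deleted during the cleanup below, so this is illustrative only) is that Docker inside the node reports the systemd cgroup driver:

	out/minikube-linux-arm64 ssh -p force-systemd-flag-649810 "docker info --format {{.CgroupDriver}}"   # expected: systemd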
helpers_test.go:176: Cleaning up "force-systemd-flag-649810" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-649810
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-649810: (2.073436115s)
--- FAIL: TestForceSystemdFlag (507.14s)

TestForceSystemdEnv (507.3s)
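Unlike TestForceSystemdFlag, this test omits --force-systemd and drives the same behavior through the environment: MINIKUBE_FORCE_SYSTEMD=true appears in the start output below. The equivalent manual invocation (a sketch, assuming the repo-local binary this job builds into out/):

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker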
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m23.485506379s)

-- stdout --
	* [force-systemd-env-475689] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-475689" primary control-plane node in "force-systemd-env-475689" cluster
	* Pulling base image v0.0.48-1766884053-22351 ...

-- /stdout --
** stderr ** 
	I1228 07:05:54.177959  224989 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:05:54.178144  224989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:54.178172  224989 out.go:374] Setting ErrFile to fd 2...
	I1228 07:05:54.178194  224989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:54.178614  224989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:05:54.179168  224989 out.go:368] Setting JSON to false
	I1228 07:05:54.180154  224989 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2904,"bootTime":1766902651,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 07:05:54.180362  224989 start.go:143] virtualization:  
	I1228 07:05:54.183750  224989 out.go:179] * [force-systemd-env-475689] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:05:54.187559  224989 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:05:54.187656  224989 notify.go:221] Checking for updates...
	I1228 07:05:54.193090  224989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:05:54.195979  224989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 07:05:54.198846  224989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 07:05:54.201744  224989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:05:54.204596  224989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1228 07:05:54.208043  224989 config.go:182] Loaded profile config "offline-docker-575789": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:05:54.208149  224989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:05:54.233633  224989 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:05:54.233792  224989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:54.319820  224989 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:05:54.309635715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:05:54.319917  224989 docker.go:319] overlay module found
	I1228 07:05:54.323025  224989 out.go:179] * Using the docker driver based on user configuration
	I1228 07:05:54.326015  224989 start.go:309] selected driver: docker
	I1228 07:05:54.326054  224989 start.go:928] validating driver "docker" against <nil>
	I1228 07:05:54.326088  224989 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:05:54.326885  224989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:05:54.388015  224989 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-28 07:05:54.378654796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:05:54.388223  224989 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:05:54.388443  224989 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:05:54.391410  224989 out.go:179] * Using Docker driver with root privileges
	I1228 07:05:54.394288  224989 cni.go:84] Creating CNI manager for ""
	I1228 07:05:54.394371  224989 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:05:54.394392  224989 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 07:05:54.394480  224989 start.go:353] cluster config:
	{Name:force-systemd-env-475689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:05:54.397590  224989 out.go:179] * Starting "force-systemd-env-475689" primary control-plane node in "force-systemd-env-475689" cluster
	I1228 07:05:54.400577  224989 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:05:54.403650  224989 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:05:54.406485  224989 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:05:54.406546  224989 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1228 07:05:54.406562  224989 cache.go:65] Caching tarball of preloaded images
	I1228 07:05:54.406569  224989 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:05:54.406648  224989 preload.go:251] Found /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:05:54.406658  224989 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:05:54.406778  224989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/config.json ...
	I1228 07:05:54.406804  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/config.json: {Name:mk582db13f500af8ef9a5a6be25aa36714bb381a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:05:54.425027  224989 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:05:54.425052  224989 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:05:54.425072  224989 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:05:54.425103  224989 start.go:360] acquireMachinesLock for force-systemd-env-475689: {Name:mk1787fadb5f8e4dd0c8801f2fd116cbb20b2f57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:05:54.425212  224989 start.go:364] duration metric: took 89.659µs to acquireMachinesLock for "force-systemd-env-475689"
	I1228 07:05:54.425241  224989 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-475689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:05:54.425312  224989 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:05:54.428753  224989 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:05:54.429003  224989 start.go:159] libmachine.API.Create for "force-systemd-env-475689" (driver="docker")
	I1228 07:05:54.429043  224989 client.go:173] LocalClient.Create starting
	I1228 07:05:54.429126  224989 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem
	I1228 07:05:54.429267  224989 main.go:144] libmachine: Decoding PEM data...
	I1228 07:05:54.429298  224989 main.go:144] libmachine: Parsing certificate...
	I1228 07:05:54.429363  224989 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem
	I1228 07:05:54.429386  224989 main.go:144] libmachine: Decoding PEM data...
	I1228 07:05:54.429401  224989 main.go:144] libmachine: Parsing certificate...
	I1228 07:05:54.429777  224989 cli_runner.go:164] Run: docker network inspect force-systemd-env-475689 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:05:54.463093  224989 cli_runner.go:211] docker network inspect force-systemd-env-475689 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:05:54.463183  224989 network_create.go:284] running [docker network inspect force-systemd-env-475689] to gather additional debugging logs...
	I1228 07:05:54.463206  224989 cli_runner.go:164] Run: docker network inspect force-systemd-env-475689
	W1228 07:05:54.490882  224989 cli_runner.go:211] docker network inspect force-systemd-env-475689 returned with exit code 1
	I1228 07:05:54.490918  224989 network_create.go:287] error running [docker network inspect force-systemd-env-475689]: docker network inspect force-systemd-env-475689: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-475689 not found
	I1228 07:05:54.490931  224989 network_create.go:289] output of [docker network inspect force-systemd-env-475689]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-475689 not found
	
	** /stderr **
	I1228 07:05:54.491085  224989 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:05:54.507997  224989 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e663f46973f0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:e5:53:aa:f4:ad} reservation:<nil>}
	I1228 07:05:54.508355  224989 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad53498571c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:ea:8c:9a:c6:5d} reservation:<nil>}
	I1228 07:05:54.508647  224989 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b73d9f306bb6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:7e:31:bd:ea:20} reservation:<nil>}
	I1228 07:05:54.508956  224989 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19ac59844b90 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:c6:00:06:80:23} reservation:<nil>}
	I1228 07:05:54.509392  224989 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a1c220}
	I1228 07:05:54.509416  224989 network_create.go:124] attempt to create docker network force-systemd-env-475689 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1228 07:05:54.509473  224989 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-475689 force-systemd-env-475689
	I1228 07:05:54.575824  224989 network_create.go:108] docker network force-systemd-env-475689 192.168.85.0/24 created
	I1228 07:05:54.575852  224989 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-475689" container
	I1228 07:05:54.575938  224989 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:05:54.592964  224989 cli_runner.go:164] Run: docker volume create force-systemd-env-475689 --label name.minikube.sigs.k8s.io=force-systemd-env-475689 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:05:54.610496  224989 oci.go:103] Successfully created a docker volume force-systemd-env-475689
	I1228 07:05:54.610756  224989 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-475689-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-475689 --entrypoint /usr/bin/test -v force-systemd-env-475689:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:05:55.180493  224989 oci.go:107] Successfully prepared a docker volume force-systemd-env-475689
	I1228 07:05:55.180565  224989 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:05:55.180581  224989 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:05:55.180660  224989 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-475689:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:05:58.721899  224989 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-475689:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.541196934s)
	I1228 07:05:58.721933  224989 kic.go:203] duration metric: took 3.541347368s to extract preloaded images to volume ...
	W1228 07:05:58.722124  224989 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:05:58.722248  224989 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:05:58.829336  224989 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-475689 --name force-systemd-env-475689 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-475689 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-475689 --network force-systemd-env-475689 --ip 192.168.85.2 --volume force-systemd-env-475689:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:05:59.328462  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Running}}
	I1228 07:05:59.360996  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Status}}
	I1228 07:05:59.389716  224989 cli_runner.go:164] Run: docker exec force-systemd-env-475689 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:05:59.467783  224989 oci.go:144] the created container "force-systemd-env-475689" has a running status.
	I1228 07:05:59.467818  224989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa...
	I1228 07:05:59.713219  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:05:59.713314  224989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:05:59.763427  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Status}}
	I1228 07:05:59.790487  224989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:05:59.790509  224989 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-475689 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:05:59.886261  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Status}}
	I1228 07:05:59.925180  224989 machine.go:94] provisionDockerMachine start ...
	I1228 07:05:59.925262  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:05:59.950910  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:59.951247  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:05:59.951258  224989 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:05:59.952424  224989 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:06:03.128429  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-475689
	
	I1228 07:06:03.128460  224989 ubuntu.go:182] provisioning hostname "force-systemd-env-475689"
	I1228 07:06:03.128525  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:03.149541  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:03.149892  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:03.149905  224989 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-475689 && echo "force-systemd-env-475689" | sudo tee /etc/hostname
	I1228 07:06:03.304349  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-475689
	
	I1228 07:06:03.304446  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:03.325342  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:03.325647  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:03.325675  224989 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-475689' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-475689/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-475689' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:06:03.464661  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:06:03.464689  224989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2382/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2382/.minikube}
	I1228 07:06:03.464708  224989 ubuntu.go:190] setting up certificates
	I1228 07:06:03.464730  224989 provision.go:84] configureAuth start
	I1228 07:06:03.464799  224989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-475689
	I1228 07:06:03.482178  224989 provision.go:143] copyHostCerts
	I1228 07:06:03.482222  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:03.482257  224989 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem, removing ...
	I1228 07:06:03.482270  224989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:03.482341  224989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem (1082 bytes)
	I1228 07:06:03.482447  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:03.482470  224989 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem, removing ...
	I1228 07:06:03.482479  224989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:03.482508  224989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem (1123 bytes)
	I1228 07:06:03.482552  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:03.482580  224989 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem, removing ...
	I1228 07:06:03.482587  224989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:03.482611  224989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem (1675 bytes)
	I1228 07:06:03.482657  224989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-475689 san=[127.0.0.1 192.168.85.2 force-systemd-env-475689 localhost minikube]
	I1228 07:06:03.869250  224989 provision.go:177] copyRemoteCerts
	I1228 07:06:03.869321  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:06:03.869367  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:03.887818  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:03.985456  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:06:03.985514  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:06:04.007148  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:06:04.007242  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1228 07:06:04.034943  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:06:04.035002  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:06:04.054890  224989 provision.go:87] duration metric: took 590.141047ms to configureAuth
	I1228 07:06:04.054916  224989 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:06:04.055093  224989 config.go:182] Loaded profile config "force-systemd-env-475689": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:06:04.055153  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:04.073305  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:04.073633  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:04.073642  224989 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:06:04.213284  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:06:04.213307  224989 ubuntu.go:71] root file system type: overlay
	I1228 07:06:04.213415  224989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:06:04.213486  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:04.231761  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:04.232085  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:04.232427  224989 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:06:04.378355  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:06:04.378450  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:04.396302  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:04.396616  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:04.396638  224989 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1228 07:06:05.918150  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:06:04.373149855 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1228 07:06:05.918183  224989 machine.go:97] duration metric: took 5.992975168s to provisionDockerMachine
	I1228 07:06:05.918197  224989 client.go:176] duration metric: took 11.489142395s to LocalClient.Create
	I1228 07:06:05.918211  224989 start.go:167] duration metric: took 11.489209891s to libmachine.API.Create "force-systemd-env-475689"
	I1228 07:06:05.918223  224989 start.go:293] postStartSetup for "force-systemd-env-475689" (driver="docker")
	I1228 07:06:05.918233  224989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:06:05.918300  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:06:05.918341  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:05.936923  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.037317  224989 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:06:06.040993  224989 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:06:06.041070  224989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:06:06.041098  224989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/addons for local assets ...
	I1228 07:06:06.041165  224989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/files for local assets ...
	I1228 07:06:06.041266  224989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> 42022.pem in /etc/ssl/certs
	I1228 07:06:06.041277  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /etc/ssl/certs/42022.pem
	I1228 07:06:06.041379  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:06:06.049144  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:06.067741  224989 start.go:296] duration metric: took 149.503869ms for postStartSetup
	I1228 07:06:06.068147  224989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-475689
	I1228 07:06:06.088885  224989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/config.json ...
	I1228 07:06:06.089181  224989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:06:06.089238  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:06.105851  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.201431  224989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:06:06.206103  224989 start.go:128] duration metric: took 11.780778105s to createHost
	I1228 07:06:06.206132  224989 start.go:83] releasing machines lock for "force-systemd-env-475689", held for 11.780905532s
	I1228 07:06:06.206206  224989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-475689
	I1228 07:06:06.222595  224989 ssh_runner.go:195] Run: cat /version.json
	I1228 07:06:06.222646  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:06.222662  224989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:06:06.222723  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:06.245805  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.246876  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.434510  224989 ssh_runner.go:195] Run: systemctl --version
	I1228 07:06:06.444141  224989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:06:06.448686  224989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:06:06.448805  224989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:06:06.476276  224989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:06:06.476303  224989 start.go:496] detecting cgroup driver to use...
	I1228 07:06:06.476322  224989 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:06.476424  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:06.491123  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:06:06.500320  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:06:06.509716  224989 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:06:06.509837  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:06:06.520435  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:06.534403  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:06:06.543771  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:06.552900  224989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:06:06.561817  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:06:06.570747  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:06:06.579476  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
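
The run of sed edits above is what actually moves containerd onto the systemd cgroup driver; the decisive one flips SystemdCgroup in /etc/containerd/config.toml. A Go sketch of just that substitution, mirroring the logged sed expression:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
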
	I1228 07:06:06.588731  224989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:06:06.596447  224989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:06:06.604065  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:06.725227  224989 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1228 07:06:06.834233  224989 start.go:496] detecting cgroup driver to use...
	I1228 07:06:06.834267  224989 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:06.834323  224989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:06:06.877179  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:06.898341  224989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:06:06.932391  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:06.952923  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:06:06.970384  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:06.995883  224989 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:06:07.001457  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:06:07.018758  224989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:06:07.039835  224989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:06:07.192544  224989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:06:07.338234  224989 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1228 07:06:07.338335  224989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1228 07:06:07.356139  224989 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:06:07.371077  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:07.508715  224989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:06:07.920426  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
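
docker.go:578 above pushes a 129-byte /etc/docker/daemon.json before restarting docker. The log shows only the file's size, so the contents below are an assumption; a sketch of the kind of file that enforces the systemd cgroup driver via Docker's documented exec-opts:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Assumed shape; the test log only reports the file's size (129 bytes).
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=systemd"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
		panic(err)
	}
}
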
	I1228 07:06:07.935127  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:06:07.950017  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:07.964123  224989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:06:08.088410  224989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:06:08.210129  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:08.328970  224989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:06:08.345517  224989 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:06:08.358574  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:08.482416  224989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:06:08.552004  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:08.567619  224989 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:06:08.567689  224989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:06:08.571941  224989 start.go:574] Will wait 60s for crictl version
	I1228 07:06:08.572000  224989 ssh_runner.go:195] Run: which crictl
	I1228 07:06:08.575839  224989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:06:08.600600  224989 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
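
start.go:553 above gives the cri-dockerd socket 60s to appear before crictl is probed. A minimal sketch of that style of wait loop (hypothetical helper; stat-based, like the logged `stat /var/run/cri-dockerd.sock`):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
}
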
	I1228 07:06:08.600670  224989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:08.622422  224989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:08.650037  224989 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:06:08.650144  224989 cli_runner.go:164] Run: docker network inspect force-systemd-env-475689 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:06:08.666832  224989 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:06:08.671032  224989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
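
The bash one-liner above pins host.minikube.internal idempotently: grep -v drops any stale entry, the echo appends the current one, and the result is copied back over /etc/hosts. The same idea expressed in Go (illustrative only; the real code shells out exactly as logged):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous host.minikube.internal line, like grep -v.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		panic(err)
	}
}
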
	I1228 07:06:08.681539  224989 kubeadm.go:884] updating cluster {Name:force-systemd-env-475689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:06:08.681653  224989 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:08.681711  224989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:08.700176  224989 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:08.700227  224989 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:06:08.700292  224989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:08.718572  224989 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:08.718601  224989 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:06:08.718612  224989 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1228 07:06:08.718708  224989 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-475689 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:06:08.718787  224989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:06:08.770196  224989 cni.go:84] Creating CNI manager for ""
	I1228 07:06:08.770222  224989 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:06:08.770242  224989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:06:08.770267  224989 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-475689 NodeName:force-systemd-env-475689 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:06:08.770396  224989 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-475689"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
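
Everything in this test hinges on `cgroupDriver: systemd` in the KubeletConfiguration above actually reaching the kubelet. A quick sanity check against the rendered file (a sketch using the third-party gopkg.in/yaml.v3 parser, reading the /var/tmp/minikube/kubeadm.yaml scp'd below):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all "---" documents are read
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", doc.CgroupDriver) // expect "systemd"
		}
	}
}
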
	
	I1228 07:06:08.770467  224989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:06:08.778485  224989 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:06:08.778563  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:06:08.786300  224989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1228 07:06:08.799271  224989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:06:08.813014  224989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1228 07:06:08.826208  224989 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:06:08.830159  224989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:06:08.840358  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:08.984246  224989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:06:09.020994  224989 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689 for IP: 192.168.85.2
	I1228 07:06:09.021013  224989 certs.go:195] generating shared ca certs ...
	I1228 07:06:09.021029  224989 certs.go:227] acquiring lock for ca certs: {Name:mkb08779780dcf6b96f2c93a4ec9c28968a3dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.021172  224989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key
	I1228 07:06:09.021215  224989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key
	I1228 07:06:09.021222  224989 certs.go:257] generating profile certs ...
	I1228 07:06:09.021277  224989 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.key
	I1228 07:06:09.021296  224989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.crt with IP's: []
	I1228 07:06:09.313580  224989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.crt ...
	I1228 07:06:09.313614  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.crt: {Name:mk226378e6b56b52aadfd2ded9c681fe9c5660f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.313849  224989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.key ...
	I1228 07:06:09.313868  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.key: {Name:mkcc683b6246ddbcb384c8b2698e7b4e6bc914ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.314021  224989 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178
	I1228 07:06:09.314041  224989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 07:06:09.463095  224989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178 ...
	I1228 07:06:09.463128  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178: {Name:mkfde4390a73a278f9a7c15f9f5fdc700e21d8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.463358  224989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178 ...
	I1228 07:06:09.463373  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178: {Name:mkecaae751c91f71a1173b180132498bec7d17df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.463474  224989 certs.go:382] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt
	I1228 07:06:09.463557  224989 certs.go:386] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key
	I1228 07:06:09.463618  224989 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key
	I1228 07:06:09.463637  224989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt with IP's: []
	I1228 07:06:10.009976  224989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt ...
	I1228 07:06:10.010371  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt: {Name:mk292857f257843736c8a536a36ea0671e82d753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:10.010674  224989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key ...
	I1228 07:06:10.010718  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key: {Name:mkc34bf3df42643e166dafc6a7c7b665e63ec741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
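
crypto.go:68 above mints the profile's certs, signing apiserver.crt for the IP set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]. A condensed standard-library sketch of the same mechanism, a CA plus an IP-SAN leaf (not minikube's crypto.go; names are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical self-signed CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the IP SANs the log reports for apiserver.crt.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
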
	I1228 07:06:10.010906  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:06:10.010964  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:06:10.010998  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:06:10.011046  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:06:10.011084  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:06:10.011118  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:06:10.011165  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:06:10.011206  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:06:10.011314  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem (1338 bytes)
	W1228 07:06:10.011379  224989 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202_empty.pem, impossibly tiny 0 bytes
	I1228 07:06:10.011419  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem (1679 bytes)
	I1228 07:06:10.011470  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:06:10.011527  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:06:10.011576  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem (1675 bytes)
	I1228 07:06:10.011669  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:10.011728  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.011775  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem -> /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.011810  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.012461  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:06:10.041564  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1228 07:06:10.069211  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:06:10.099395  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:06:10.125275  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:06:10.145764  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:06:10.166815  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:06:10.188644  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:06:10.210572  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:06:10.232377  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem --> /usr/share/ca-certificates/4202.pem (1338 bytes)
	I1228 07:06:10.258152  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /usr/share/ca-certificates/42022.pem (1708 bytes)
	I1228 07:06:10.278721  224989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:06:10.294217  224989 ssh_runner.go:195] Run: openssl version
	I1228 07:06:10.301133  224989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.308978  224989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:06:10.316640  224989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.320690  224989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.320759  224989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.364308  224989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:06:10.377244  224989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:06:10.387661  224989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.397975  224989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4202.pem /etc/ssl/certs/4202.pem
	I1228 07:06:10.406609  224989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.411012  224989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.411075  224989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.460006  224989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:06:10.469064  224989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4202.pem /etc/ssl/certs/51391683.0
	I1228 07:06:10.479105  224989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.487833  224989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42022.pem /etc/ssl/certs/42022.pem
	I1228 07:06:10.498878  224989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.504606  224989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.504673  224989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.567710  224989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:06:10.577669  224989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42022.pem /etc/ssl/certs/3ec20f2e.0
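
The openssl x509 -hash / ln -fs pairs above install the subject-hash links (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL's hashed cert-directory lookup expects. The two steps, sketched from Go by shelling out to the openssl binary just as the log does:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
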
	I1228 07:06:10.585672  224989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:06:10.590990  224989 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:06:10.591091  224989 kubeadm.go:401] StartCluster: {Name:force-systemd-env-475689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:06:10.591254  224989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:06:10.620859  224989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:06:10.633799  224989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:06:10.644816  224989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:06:10.644931  224989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:06:10.657320  224989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:06:10.657399  224989 kubeadm.go:158] found existing configuration files:
	
	I1228 07:06:10.657484  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:06:10.667230  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:06:10.667344  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:06:10.676219  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:06:10.686094  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:06:10.686210  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:06:10.694628  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:06:10.703822  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:06:10.703940  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:06:10.712262  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:06:10.721927  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:06:10.722063  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:06:10.731589  224989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:06:10.772893  224989 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:06:10.773092  224989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:06:10.884593  224989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:06:10.884763  224989 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:06:10.884831  224989 kubeadm.go:319] OS: Linux
	I1228 07:06:10.884886  224989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:06:10.884938  224989 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:06:10.884990  224989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:06:10.885041  224989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:06:10.885093  224989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:06:10.885144  224989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:06:10.885194  224989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:06:10.885246  224989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:06:10.885296  224989 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:06:10.981004  224989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:06:10.981127  224989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:06:10.981225  224989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:06:10.996930  224989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:06:11.003793  224989 out.go:252]   - Generating certificates and keys ...
	I1228 07:06:11.003980  224989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:06:11.004103  224989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:06:11.101359  224989 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:06:11.625276  224989 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:06:12.079231  224989 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:06:12.156403  224989 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:06:12.372673  224989 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:06:12.372817  224989 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:06:12.763495  224989 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:06:12.763790  224989 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:06:13.220874  224989 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:06:13.320720  224989 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:06:13.640940  224989 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:06:13.641017  224989 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:06:13.820509  224989 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:06:14.044125  224989 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:06:14.785380  224989 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:06:15.040596  224989 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:06:15.253841  224989 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:06:15.253946  224989 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:06:15.254018  224989 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:06:15.257399  224989 out.go:252]   - Booting up control plane ...
	I1228 07:06:15.257504  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:06:15.257581  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:06:15.257650  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:06:15.284790  224989 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:06:15.284950  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:06:15.294293  224989 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:06:15.298846  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:06:15.298926  224989 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:06:15.485640  224989 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:06:15.485763  224989 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:10:15.486766  224989 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001236445s
	I1228 07:10:15.486809  224989 kubeadm.go:319] 
	I1228 07:10:15.486868  224989 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:10:15.486901  224989 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:10:15.487013  224989 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:10:15.487023  224989 kubeadm.go:319] 
	I1228 07:10:15.487136  224989 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:10:15.487171  224989 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:10:15.487204  224989 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:10:15.487209  224989 kubeadm.go:319] 
	I1228 07:10:15.491521  224989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:10:15.491947  224989 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:10:15.492061  224989 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:10:15.492317  224989 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:10:15.492330  224989 kubeadm.go:319] 
	I1228 07:10:15.492406  224989 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
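
The four-minute wait that fails here is kubeadm polling the kubelet's local healthz endpoint. When reproducing this failure, the same probe the error message quotes (`curl -sSL http://127.0.0.1:10248/healthz`) can be run by hand; a minimal Go version:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		// This is the state the test is stuck in: the kubelet never answers.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // a healthy kubelet returns 200 "ok"
}
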
	W1228 07:10:15.492558  224989 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001236445s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
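
The middle SystemVerification warning is the most likely root cause on this cgroup v1 (5.15.0-1084-aws) host: kubelet v1.35 refuses cgroup v1 unless it is explicitly opted back in. Going by the warning's own wording, the opt-in is a KubeletConfiguration field; a fragment of what that would look like (field casing per kubelet config conventions is an assumption):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Explicitly allow running on a cgroup v1 host (deprecated path); per the
# warning above, the SystemVerification check must also be skipped.
failCgroupV1: false
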
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001236445s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
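Note: the two troubleshooting commands kubeadm suggests have to run inside the kicbase node container, not on the Jenkins host. A minimal sketch, assuming the profile container from this run is still up ('minikube ssh -p force-systemd-env-475689' would work equally well as the docker exec wrapper):

	docker exec force-systemd-env-475689 systemctl status kubelet --no-pager
	docker exec force-systemd-env-475689 journalctl -xeu kubelet --no-pager | tail -n 50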
	
	I1228 07:10:15.492645  224989 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:10:15.929418  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:10:15.943183  224989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:10:15.943253  224989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:10:15.951638  224989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:10:15.951662  224989 kubeadm.go:158] found existing configuration files:
	
	I1228 07:10:15.951723  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:10:15.959911  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:10:15.959978  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:10:15.967770  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:10:15.976086  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:10:15.976157  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:10:15.984175  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:10:15.992276  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:10:15.992347  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:10:16.001958  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:10:16.013995  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:10:16.014062  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
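The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at the expected control-plane endpoint. Condensed into one loop, this is a sketch of what the log shows, not minikube's actual implementation:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done

Every grep here exits 2 because 'kubeadm reset' already removed the files, so the rm calls are no-ops before the retried 'kubeadm init' below.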
	I1228 07:10:16.023133  224989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:10:16.074115  224989 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:10:16.074339  224989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:10:16.150923  224989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:10:16.150997  224989 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:10:16.151033  224989 kubeadm.go:319] OS: Linux
	I1228 07:10:16.151080  224989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:10:16.151128  224989 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:10:16.151176  224989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:10:16.151224  224989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:10:16.151272  224989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:10:16.151320  224989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:10:16.151367  224989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:10:16.151416  224989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:10:16.151462  224989 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:10:16.220799  224989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:10:16.220924  224989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:10:16.221027  224989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:10:16.234952  224989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:10:16.240621  224989 out.go:252]   - Generating certificates and keys ...
	I1228 07:10:16.240734  224989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:10:16.240813  224989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:10:16.240895  224989 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:10:16.240965  224989 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:10:16.241080  224989 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:10:16.241179  224989 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:10:16.241283  224989 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:10:16.241398  224989 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:10:16.241511  224989 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:10:16.241637  224989 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:10:16.241715  224989 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:10:16.241807  224989 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:10:16.339903  224989 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:10:16.724907  224989 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:10:16.794526  224989 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:10:16.947622  224989 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:10:17.124090  224989 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:10:17.124968  224989 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:10:17.127679  224989 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:10:17.133057  224989 out.go:252]   - Booting up control plane ...
	I1228 07:10:17.133167  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:10:17.133245  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:10:17.133313  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:10:17.151708  224989 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:10:17.151859  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:10:17.160427  224989 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:10:17.161351  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:10:17.161581  224989 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:10:17.294931  224989 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:10:17.295052  224989 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:14:17.295571  224989 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001050371s
	I1228 07:14:17.295599  224989 kubeadm.go:319] 
	I1228 07:14:17.295657  224989 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:14:17.295694  224989 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:14:17.295833  224989 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:14:17.295849  224989 kubeadm.go:319] 
	I1228 07:14:17.295955  224989 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:14:17.295988  224989 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:14:17.296019  224989 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:14:17.296023  224989 kubeadm.go:319] 
	I1228 07:14:17.299652  224989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:14:17.300134  224989 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:14:17.300273  224989 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:14:17.300566  224989 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:14:17.300581  224989 kubeadm.go:319] 
	I1228 07:14:17.300718  224989 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
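The cgroups v1 warning in this replay names the kubelet option FailCgroupV1. It is only a preflight warning here (kubeadm proceeds past it), but for reference, the opt-out it describes would be a KubeletConfiguration fragment roughly like the following; the lowercase field spelling failCgroupV1 is inferred from the warning text and the v1beta1 config API, so treat it as an assumption:

	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# assumed spelling of the 'FailCgroupV1' option named in the warning
	failCgroupV1: false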
	I1228 07:14:17.300738  224989 kubeadm.go:403] duration metric: took 8m6.709649294s to StartCluster
	I1228 07:14:17.300826  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.313111  224989 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.313186  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.324379  224989 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.324445  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.335616  224989 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.335682  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.346699  224989 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.346770  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.357801  224989 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.357874  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.368852  224989 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.368918  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.380328  224989 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.380354  224989 logs.go:123] Gathering logs for kubelet ...
	I1228 07:14:17.380365  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:14:17.438037  224989 logs.go:123] Gathering logs for dmesg ...
	I1228 07:14:17.438073  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:14:17.453671  224989 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:14:17.453699  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:14:17.528877  224989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:14:17.519796    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.520230    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.522445    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.522831    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.524536    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
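These connection-refused errors are consistent with the kubelet never becoming healthy: no kubelet means no static pods, so nothing is serving on apiserver port 8443. A quick confirmation from the host, assuming the node container is still running and the kicbase image ships curl:

	docker exec force-systemd-env-475689 curl -sk https://localhost:8443/healthz

which should fail with the same connection refused at this point in the run.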
	I1228 07:14:17.528911  224989 logs.go:123] Gathering logs for Docker ...
	I1228 07:14:17.528924  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:14:17.554197  224989 logs.go:123] Gathering logs for container status ...
	I1228 07:14:17.554232  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:14:17.598393  224989 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001050371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:14:17.598446  224989 out.go:285] * 
	W1228 07:14:17.598529  224989 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001050371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:17.598551  224989 out.go:285] * 
	W1228 07:14:17.598802  224989 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:14:17.605739  224989 out.go:203] 
	W1228 07:14:17.608493  224989 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001050371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:17.608549  224989 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:14:17.608573  224989 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
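Spelled out, the suggested retry combines the arguments the test used (see docker_test.go:157 below) with the extra kubelet config from the hint above; whether it helps depends on whether the kubelet's cgroup driver actually disagrees with Docker's, which 'journalctl -xeu kubelet' would confirm:

	out/minikube-linux-arm64 start -p force-systemd-env-475689 --memory=3072 \
	  --alsologtostderr -v=5 --driver=docker --container-runtime=docker \
	  --extra-config=kubelet.cgroup-driver=systemd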
	I1228 07:14:17.611807  224989 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-475689 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-28 07:14:18.050112893 +0000 UTC m=+2780.026734961
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-475689
helpers_test.go:244: (dbg) docker inspect force-systemd-env-475689:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875",
	        "Created": "2025-12-28T07:05:58.846434155Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-28T07:05:58.962391695Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02c8841c3fcd0bf74c67c65bf527029b123b3355e8bc89181a31113e9282ee3c",
	        "ResolvConfPath": "/var/lib/docker/containers/1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875/hosts",
	        "LogPath": "/var/lib/docker/containers/1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875/1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875-json.log",
	        "Name": "/force-systemd-env-475689",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-475689:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-475689",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ad9838305d426ca0c10f7cdde5ee0fa0f25ca735943a48c528ec4916a78d875",
	                "LowerDir": "/var/lib/docker/overlay2/081a62a836ebe622e39001cb83bec2bc2c8b2da879bb50150ca5543f7110c030-init/diff:/var/lib/docker/overlay2/ecb99d95c7e8ff1804547a73cc82a9ed1888766e4e833c4a7b53fdf298df8f33/diff",
	                "MergedDir": "/var/lib/docker/overlay2/081a62a836ebe622e39001cb83bec2bc2c8b2da879bb50150ca5543f7110c030/merged",
	                "UpperDir": "/var/lib/docker/overlay2/081a62a836ebe622e39001cb83bec2bc2c8b2da879bb50150ca5543f7110c030/diff",
	                "WorkDir": "/var/lib/docker/overlay2/081a62a836ebe622e39001cb83bec2bc2c8b2da879bb50150ca5543f7110c030/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-475689",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-475689/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-475689",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-475689",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-475689",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0f828353870cc8a78a6bedaaf338a2996da7c3ebca214bd4209a118938f784d",
	            "SandboxKey": "/var/run/docker/netns/e0f828353870",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32994"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32995"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32998"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32996"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32997"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-475689": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:ce:d4:e7:0f:ba",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c063b6da8585ad3de4da50404253fd4b7d5876259e454d3bf64d3847923b756a",
	                    "EndpointID": "7dd005e2d11b17b70a8b309725ff52ba11ee5559f7914d237135878338493a85",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-475689",
	                        "1ad9838305d4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
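
The Ports map in the inspect output above is the only route into the kic container: each container port (22, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 port, and minikube resolves 22/tcp with a Go template in the cli_runner lines further down. A minimal stand-alone sketch of the same lookup, assuming docker is on PATH; the helper name is hypothetical and not part of minikube:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // inspectEntry models only the fields needed from `docker container inspect`.
    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    // sshHostPort resolves the 127.0.0.1 port docker published for the
    // container's 22/tcp (32994 in the output above).
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", container).Output()
    	if err != nil {
    		return "", err
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		return "", err
    	}
    	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
    		return "", fmt.Errorf("no host binding for 22/tcp on %s", container)
    	}
    	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
    }

    func main() {
    	fmt.Println(sshHostPort("force-systemd-env-475689"))
    }
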
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-475689 -n force-systemd-env-475689
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-475689 -n force-systemd-env-475689: exit status 6 (295.487567ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1228 07:14:18.351054  238100 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-475689" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
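
Exit status 6 is the kubeconfig mismatch from the stderr above: the container is Running, but the profile has no cluster entry in the kubeconfig, so status.go cannot resolve an API endpoint. Roughly what that check amounts to, sketched with client-go's clientcmd loader; this is illustrative, not minikube's actual code, and assumes k8s.io/client-go is on the module path:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the same file the harness points at via KUBECONFIG.
    	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
    		os.Exit(1)
    	}
    	const profile = "force-systemd-env-475689"
    	cluster, ok := cfg.Clusters[profile]
    	if !ok {
    		// The condition behind the error above: container up, kubeconfig never updated.
    		fmt.Printf("%q does not appear in kubeconfig\n", profile)
    		return
    	}
    	fmt.Println("endpoint:", cluster.Server)
    }
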
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-475689 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-436830 sudo cat /etc/kubernetes/kubelet.conf                                                                        │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /var/lib/kubelet/config.yaml                                                                        │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status docker --all --full --no-pager                                                         │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat docker --no-pager                                                                         │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /etc/docker/daemon.json                                                                             │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo docker system info                                                                                      │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cri-dockerd --version                                                                                   │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat containerd --no-pager                                                                     │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo cat /etc/containerd/config.toml                                                                         │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo containerd config dump                                                                                  │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo systemctl cat crio --no-pager                                                                           │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ -p cilium-436830 sudo crio config                                                                                             │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ delete  │ -p cilium-436830                                                                                                              │ cilium-436830             │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p force-systemd-env-475689 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-475689  │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ delete  │ -p offline-docker-575789                                                                                                      │ offline-docker-575789     │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │ 28 Dec 25 07:05 UTC │
	│ start   │ -p force-systemd-flag-649810 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-649810 │ jenkins │ v1.37.0 │ 28 Dec 25 07:05 UTC │                     │
	│ ssh     │ force-systemd-env-475689 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-475689  │ jenkins │ v1.37.0 │ 28 Dec 25 07:14 UTC │ 28 Dec 25 07:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 07:05:59
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
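
A throwaway sketch for pulling these glog-style headers apart, assuming exactly the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout stated above:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // header matches: severity, mmdd, time, thread id, file, line, message.
    var header = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	m := header.FindStringSubmatch("I1228 07:05:59.908381  226337 out.go:360] Setting OutFile to fd 1 ...")
    	if m != nil {
    		fmt.Printf("sev=%s mmdd=%s time=%s pid=%s file=%s line=%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    	}
    }
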
	I1228 07:05:59.908381  226337 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:05:59.908505  226337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:59.908511  226337 out.go:374] Setting ErrFile to fd 2...
	I1228 07:05:59.908515  226337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:05:59.908870  226337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:05:59.909316  226337 out.go:368] Setting JSON to false
	I1228 07:05:59.911797  226337 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2909,"bootTime":1766902651,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 07:05:59.911876  226337 start.go:143] virtualization:  
	I1228 07:05:59.916290  226337 out.go:179] * [force-systemd-flag-649810] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 07:05:59.920371  226337 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 07:05:59.920602  226337 notify.go:221] Checking for updates...
	I1228 07:05:59.930233  226337 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 07:05:59.933365  226337 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 07:05:59.936750  226337 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 07:05:59.939782  226337 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 07:05:59.943014  226337 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 07:05:59.946329  226337 config.go:182] Loaded profile config "force-systemd-env-475689": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:05:59.946460  226337 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 07:05:59.989184  226337 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 07:05:59.989298  226337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:06:00.147313  226337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-28 07:06:00.132170795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:06:00.147445  226337 docker.go:319] overlay module found
	I1228 07:06:00.151222  226337 out.go:179] * Using the docker driver based on user configuration
	I1228 07:06:00.154382  226337 start.go:309] selected driver: docker
	I1228 07:06:00.154405  226337 start.go:928] validating driver "docker" against <nil>
	I1228 07:06:00.154420  226337 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 07:06:00.155281  226337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 07:06:00.370917  226337 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-12-28 07:06:00.355782962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 07:06:00.371072  226337 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 07:06:00.371298  226337 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 07:06:00.374992  226337 out.go:179] * Using Docker driver with root privileges
	I1228 07:06:00.377986  226337 cni.go:84] Creating CNI manager for ""
	I1228 07:06:00.378068  226337 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:06:00.378082  226337 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 07:06:00.378160  226337 start.go:353] cluster config:
	{Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:06:00.381388  226337 out.go:179] * Starting "force-systemd-flag-649810" primary control-plane node in "force-systemd-flag-649810" cluster
	I1228 07:06:00.384286  226337 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 07:06:00.387356  226337 out.go:179] * Pulling base image v0.0.48-1766884053-22351 ...
	I1228 07:06:00.390388  226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:00.390453  226337 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1228 07:06:00.390466  226337 cache.go:65] Caching tarball of preloaded images
	I1228 07:06:00.390569  226337 preload.go:251] Found /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1228 07:06:00.390579  226337 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1228 07:06:00.390710  226337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json ...
	I1228 07:06:00.390748  226337 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 07:06:00.390743  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json: {Name:mkcc4924bc7430bc738783d3bc1ceb8a9cf9dbf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:00.418489  226337 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon, skipping pull
	I1228 07:06:00.418520  226337 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 exists in daemon, skipping load
	I1228 07:06:00.418536  226337 cache.go:243] Successfully downloaded all kic artifacts
	I1228 07:06:00.418570  226337 start.go:360] acquireMachinesLock for force-systemd-flag-649810: {Name:mka57d38f56a82b4b8389b88f726a058fa795922 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1228 07:06:00.418691  226337 start.go:364] duration metric: took 104.256µs to acquireMachinesLock for "force-systemd-flag-649810"
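
The two lock lines above carry their retry policy inline (Delay:500ms with Timeout:1m0s for the profile config write, Timeout:10m0s for the machines lock). minikube's actual lock implementation is more involved, but the shape is poll-until-deadline; a hypothetical file-based version of that pattern:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lock file every `delay` until `timeout`.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held")
    }
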
	I1228 07:06:00.418719  226337 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1228 07:06:00.418813  226337 start.go:125] createHost starting for "" (driver="docker")
	I1228 07:05:59.328462  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Running}}
	I1228 07:05:59.360996  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Status}}
	I1228 07:05:59.389716  224989 cli_runner.go:164] Run: docker exec force-systemd-env-475689 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:05:59.467783  224989 oci.go:144] the created container "force-systemd-env-475689" has a running status.
	I1228 07:05:59.467818  224989 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa...
	I1228 07:05:59.713219  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:05:59.713314  224989 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:05:59.763427  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Status}}
	I1228 07:05:59.790487  224989 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:05:59.790509  224989 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-475689 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:05:59.886261  224989 cli_runner.go:164] Run: docker container inspect force-systemd-env-475689 --format={{.State.Status}}
	I1228 07:05:59.925180  224989 machine.go:94] provisionDockerMachine start ...
	I1228 07:05:59.925262  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:05:59.950910  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:05:59.951247  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:05:59.951258  224989 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:05:59.952424  224989 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1228 07:06:03.128429  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-475689
	
	I1228 07:06:03.128460  224989 ubuntu.go:182] provisioning hostname "force-systemd-env-475689"
	I1228 07:06:03.128525  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:03.149541  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:03.149892  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:03.149905  224989 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-475689 && echo "force-systemd-env-475689" | sudo tee /etc/hostname
	I1228 07:06:03.304349  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-475689
	
	I1228 07:06:03.304446  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:03.325342  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:03.325647  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:03.325675  224989 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-475689' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-475689/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-475689' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:06:03.464661  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:06:03.464689  224989 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2382/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2382/.minikube}
	I1228 07:06:03.464708  224989 ubuntu.go:190] setting up certificates
	I1228 07:06:03.464730  224989 provision.go:84] configureAuth start
	I1228 07:06:03.464799  224989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-475689
	I1228 07:06:03.482178  224989 provision.go:143] copyHostCerts
	I1228 07:06:03.482222  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:03.482257  224989 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem, removing ...
	I1228 07:06:03.482270  224989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:03.482341  224989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem (1082 bytes)
	I1228 07:06:03.482447  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:03.482470  224989 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem, removing ...
	I1228 07:06:03.482479  224989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:03.482508  224989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem (1123 bytes)
	I1228 07:06:03.482552  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:03.482580  224989 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem, removing ...
	I1228 07:06:03.482587  224989 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:03.482611  224989 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem (1675 bytes)
	I1228 07:06:03.482657  224989 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-475689 san=[127.0.0.1 192.168.85.2 force-systemd-env-475689 localhost minikube]
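
The san=[...] list on the provision line above becomes the server certificate's subject alternative names. A standard-library sketch of an equivalent certificate, self-signed here for brevity where the real one is signed by the minikube CA (ca.pem/ca-key.pem); key size and validity are assumptions apart from the 26280h CertExpiration visible in the cluster config:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-env-475689"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		// The SANs from the log line above:
    		DNSNames:    []string{"force-systemd-env-475689", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("server cert: %d DER bytes\n", len(der))
    }
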
	I1228 07:06:03.869250  224989 provision.go:177] copyRemoteCerts
	I1228 07:06:03.869321  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:06:03.869367  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:03.887818  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:03.985456  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:06:03.985514  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:06:04.007148  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:06:04.007242  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1228 07:06:04.034943  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:06:04.035002  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:06:04.054890  224989 provision.go:87] duration metric: took 590.141047ms to configureAuth
	I1228 07:06:04.054916  224989 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:06:04.055093  224989 config.go:182] Loaded profile config "force-systemd-env-475689": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:06:04.055153  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:04.073305  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:04.073633  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:04.073642  224989 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:06:00.426510  226337 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1228 07:06:00.426912  226337 start.go:159] libmachine.API.Create for "force-systemd-flag-649810" (driver="docker")
	I1228 07:06:00.426990  226337 client.go:173] LocalClient.Create starting
	I1228 07:06:00.427147  226337 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem
	I1228 07:06:00.427225  226337 main.go:144] libmachine: Decoding PEM data...
	I1228 07:06:00.427273  226337 main.go:144] libmachine: Parsing certificate...
	I1228 07:06:00.427370  226337 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem
	I1228 07:06:00.427427  226337 main.go:144] libmachine: Decoding PEM data...
	I1228 07:06:00.427455  226337 main.go:144] libmachine: Parsing certificate...
	I1228 07:06:00.428427  226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1228 07:06:00.447890  226337 cli_runner.go:211] docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1228 07:06:00.447979  226337 network_create.go:284] running [docker network inspect force-systemd-flag-649810] to gather additional debugging logs...
	I1228 07:06:00.448135  226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810
	W1228 07:06:00.469959  226337 cli_runner.go:211] docker network inspect force-systemd-flag-649810 returned with exit code 1
	I1228 07:06:00.469990  226337 network_create.go:287] error running [docker network inspect force-systemd-flag-649810]: docker network inspect force-systemd-flag-649810: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-649810 not found
	I1228 07:06:00.470003  226337 network_create.go:289] output of [docker network inspect force-systemd-flag-649810]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-649810 not found
	
	** /stderr **
	I1228 07:06:00.470126  226337 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:06:00.492500  226337 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e663f46973f0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:e5:53:aa:f4:ad} reservation:<nil>}
	I1228 07:06:00.492943  226337 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-ad53498571c9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7e:ea:8c:9a:c6:5d} reservation:<nil>}
	I1228 07:06:00.493252  226337 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b73d9f306bb6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:7e:31:bd:ea:20} reservation:<nil>}
	I1228 07:06:00.493666  226337 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197fcd0}
	I1228 07:06:00.493683  226337 network_create.go:124] attempt to create docker network force-systemd-flag-649810 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1228 07:06:00.493748  226337 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-649810 force-systemd-flag-649810
	I1228 07:06:00.576344  226337 network_create.go:108] docker network force-systemd-flag-649810 192.168.76.0/24 created
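
The three skipped subnets and the chosen one step by 9 (49, 58, 67, 76; the env cluster earlier got the next slot, 85). Inferring that stride from the log, a toy version of the free-subnet scan with the taken set hard-coded from the lines above:

    package main

    import "fmt"

    // firstFreeSubnet walks private /24 candidates 9 apart, as the
    // network.go lines above do, and returns the first unclaimed one.
    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third <= 247; third += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[subnet] {
    			return subnet
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-e663f46973f0
    		"192.168.58.0/24": true, // br-ad53498571c9
    		"192.168.67.0/24": true, // br-b73d9f306bb6
    	}
    	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
    }
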
	I1228 07:06:00.576376  226337 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-649810" container
	I1228 07:06:00.576446  226337 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1228 07:06:00.593986  226337 cli_runner.go:164] Run: docker volume create force-systemd-flag-649810 --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --label created_by.minikube.sigs.k8s.io=true
	I1228 07:06:00.616436  226337 oci.go:103] Successfully created a docker volume force-systemd-flag-649810
	I1228 07:06:00.616534  226337 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-649810-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --entrypoint /usr/bin/test -v force-systemd-flag-649810:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -d /var/lib
	I1228 07:06:01.192754  226337 oci.go:107] Successfully prepared a docker volume force-systemd-flag-649810
	I1228 07:06:01.192824  226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:01.192841  226337 kic.go:194] Starting extracting preloaded images to volume ...
	I1228 07:06:01.192909  226337 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-649810:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1228 07:06:04.534678  226337 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-649810:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.341730036s)
	I1228 07:06:04.534706  226337 kic.go:203] duration metric: took 3.341862567s to extract preloaded images to volume ...
	W1228 07:06:04.534846  226337 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1228 07:06:04.534950  226337 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1228 07:06:04.616424  226337 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-649810 --name force-systemd-flag-649810 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-649810 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-649810 --network force-systemd-flag-649810 --ip 192.168.76.2 --volume force-systemd-flag-649810:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1
	I1228 07:06:04.213284  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:06:04.213307  224989 ubuntu.go:71] root file system type: overlay
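
That fstype probe is plain df parsing: run `df --output=fstype /` and keep the last field ("overlay" inside the kic container). A minimal equivalent, assuming GNU df as shipped in the kicbase image:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // rootFSType returns the filesystem type of /, e.g. "overlay".
    func rootFSType() (string, error) {
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		return "", err
    	}
    	fields := strings.Fields(strings.TrimSpace(string(out)))
    	if len(fields) == 0 {
    		return "", fmt.Errorf("unexpected df output %q", out)
    	}
    	return fields[len(fields)-1], nil
    }

    func main() {
    	fmt.Println(rootFSType())
    }
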
	I1228 07:06:04.213415  224989 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:06:04.213486  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:04.231761  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:04.232085  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:04.232427  224989 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:06:04.378355  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:06:04.378450  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:04.396302  224989 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:04.396616  224989 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32994 <nil> <nil>}
	I1228 07:06:04.396638  224989 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1228 07:06:05.918150  224989 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:06:04.373149855 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
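
The diff output above appears because the update command is guarded: `diff -u current new` fails only when the rendered unit differs from the installed one, and only then does the `|| { ... }` branch swap the file, daemon-reload, enable, and restart docker. A rough Go rendering of that guard, with the same paths and commands as the log and error handling trimmed for brevity:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
    	next, _ := os.ReadFile("/lib/systemd/system/docker.service.new")
    	if bytes.Equal(cur, next) {
    		return // unit unchanged: docker keeps running untouched
    	}
    	os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service")
    	for _, args := range [][]string{
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		exec.Command("sudo", args...).Run()
    	}
    }
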
	
	I1228 07:06:05.918183  224989 machine.go:97] duration metric: took 5.992975168s to provisionDockerMachine
	I1228 07:06:05.918197  224989 client.go:176] duration metric: took 11.489142395s to LocalClient.Create
	I1228 07:06:05.918211  224989 start.go:167] duration metric: took 11.489209891s to libmachine.API.Create "force-systemd-env-475689"
	I1228 07:06:05.918223  224989 start.go:293] postStartSetup for "force-systemd-env-475689" (driver="docker")
	I1228 07:06:05.918233  224989 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:06:05.918300  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:06:05.918341  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:05.936923  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.037317  224989 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:06:06.040993  224989 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:06:06.041070  224989 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:06:06.041098  224989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/addons for local assets ...
	I1228 07:06:06.041165  224989 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/files for local assets ...
	I1228 07:06:06.041266  224989 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> 42022.pem in /etc/ssl/certs
	I1228 07:06:06.041277  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /etc/ssl/certs/42022.pem
	I1228 07:06:06.041379  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:06:06.049144  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:06.067741  224989 start.go:296] duration metric: took 149.503869ms for postStartSetup
	I1228 07:06:06.068147  224989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-475689
	I1228 07:06:06.088885  224989 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/config.json ...
	I1228 07:06:06.089181  224989 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:06:06.089238  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:06.105851  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.201431  224989 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:06:06.206103  224989 start.go:128] duration metric: took 11.780778105s to createHost
	I1228 07:06:06.206132  224989 start.go:83] releasing machines lock for "force-systemd-env-475689", held for 11.780905532s
	I1228 07:06:06.206206  224989 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-475689
	I1228 07:06:06.222595  224989 ssh_runner.go:195] Run: cat /version.json
	I1228 07:06:06.222646  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:06.222662  224989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:06:06.222723  224989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-475689
	I1228 07:06:06.245805  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.246876  224989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-env-475689/id_rsa Username:docker}
	I1228 07:06:06.434510  224989 ssh_runner.go:195] Run: systemctl --version
	I1228 07:06:06.444141  224989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:06:06.448686  224989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:06:06.448805  224989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:06:06.476276  224989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:06:06.476303  224989 start.go:496] detecting cgroup driver to use...
	I1228 07:06:06.476322  224989 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:06.476424  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:06.491123  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:06:06.500320  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:06:06.509716  224989 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:06:06.509837  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:06:06.520435  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:06.534403  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:06:06.543771  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:06.552900  224989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:06:06.561817  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:06:06.570747  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:06:06.579476  224989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:06:06.588731  224989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:06:06.596447  224989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:06:06.604065  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:06.725227  224989 ssh_runner.go:195] Run: sudo systemctl restart containerd
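The sed pass above is how the force-systemd path rewrites containerd's config: every SystemdCgroup key in /etc/containerd/config.toml is forced to true before the daemon is restarted. Condensed to the essential commands (taken directly from the Run: lines above, with a verification step added):

    # Enforce the systemd cgroup driver in containerd, then restart it.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # verify the rewrite took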
	I1228 07:06:06.834233  224989 start.go:496] detecting cgroup driver to use...
	I1228 07:06:06.834267  224989 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:06.834323  224989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:06:06.877179  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:06.898341  224989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:06:06.932391  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:06.952923  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:06:06.970384  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:06.995883  224989 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:06:07.001457  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:06:07.018758  224989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:06:07.039835  224989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:06:07.192544  224989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:06:07.338234  224989 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1228 07:06:07.338335  224989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1228 07:06:07.356139  224989 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:06:07.371077  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:07.508715  224989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:06:07.920426  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
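The 129-byte file scp'd to /etc/docker/daemon.json above is what switches dockerd itself to the systemd cgroup driver. The log does not print its contents; a representative guess at the relevant key, plus the same check minikube runs later (docker info --format '{{.CgroupDriver}}'):

    # Assumed daemon.json content -- only the cgroup-driver key is implied
    # by this log; the real file may carry additional settings.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # expect: systemd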
	I1228 07:06:07.935127  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:06:07.950017  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:07.964123  224989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:06:08.088410  224989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:06:08.210129  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:08.328970  224989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:06:08.345517  224989 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:06:08.358574  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:08.482416  224989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:06:08.552004  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:08.567619  224989 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:06:08.567689  224989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:06:08.571941  224989 start.go:574] Will wait 60s for crictl version
	I1228 07:06:08.572000  224989 ssh_runner.go:195] Run: which crictl
	I1228 07:06:08.575839  224989 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:06:08.600600  224989 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
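crictl finds the runtime through the /etc/crictl.yaml written earlier (runtime-endpoint: unix:///var/run/cri-dockerd.sock), so the version probe above can be reproduced by hand:

    # Query the CRI runtime through cri-dockerd's socket.
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version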
	I1228 07:06:08.600670  224989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:08.622422  224989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:08.650037  224989 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:06:08.650144  224989 cli_runner.go:164] Run: docker network inspect force-systemd-env-475689 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:06:08.666832  224989 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1228 07:06:08.671032  224989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
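The one-liner above is the standard trick for editing /etc/hosts inside a container: the file is bind-mounted, so sed -i (which replaces the file by rename) would fail or detach the mount, whereas writing a temp file and cp'ing over the original rewrites the same inode in place. In isolation:

    # Rewrite /etc/hosts in place (bind-mount safe): filter, append, copy back.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts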
	I1228 07:06:08.681539  224989 kubeadm.go:884] updating cluster {Name:force-systemd-env-475689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:06:08.681653  224989 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:08.681711  224989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:08.700176  224989 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:08.700227  224989 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:06:08.700292  224989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:08.718572  224989 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:08.718601  224989 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:06:08.718612  224989 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1228 07:06:08.718708  224989 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-475689 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:06:08.718787  224989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:06:08.770196  224989 cni.go:84] Creating CNI manager for ""
	I1228 07:06:08.770222  224989 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:06:08.770242  224989 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:06:08.770267  224989 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-475689 NodeName:force-systemd-env-475689 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:06:08.770396  224989 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-475689"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1228 07:06:08.770467  224989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:06:08.778485  224989 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:06:08.778563  224989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:06:08.786300  224989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1228 07:06:08.799271  224989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:06:08.813014  224989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
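With the rendered config now on the node, it could be exercised without mutating anything via kubeadm's dry-run mode (a hedged aside; the test itself goes straight to the real init further below):

    # Validate the generated config without applying it.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run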
	I1228 07:06:08.826208  224989 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:06:08.830159  224989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:06:08.840358  224989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:08.984246  224989 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:06:09.020994  224989 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689 for IP: 192.168.85.2
	I1228 07:06:09.021013  224989 certs.go:195] generating shared ca certs ...
	I1228 07:06:09.021029  224989 certs.go:227] acquiring lock for ca certs: {Name:mkb08779780dcf6b96f2c93a4ec9c28968a3dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.021172  224989 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key
	I1228 07:06:09.021215  224989 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key
	I1228 07:06:09.021222  224989 certs.go:257] generating profile certs ...
	I1228 07:06:09.021277  224989 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.key
	I1228 07:06:09.021296  224989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.crt with IP's: []
	I1228 07:06:05.050915  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Running}}
	I1228 07:06:05.083775  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
	I1228 07:06:05.121760  226337 cli_runner.go:164] Run: docker exec force-systemd-flag-649810 stat /var/lib/dpkg/alternatives/iptables
	I1228 07:06:05.195792  226337 oci.go:144] the created container "force-systemd-flag-649810" has a running status.
	I1228 07:06:05.195838  226337 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa...
	I1228 07:06:05.653918  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1228 07:06:05.653967  226337 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1228 07:06:05.681329  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
	I1228 07:06:05.716468  226337 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1228 07:06:05.716494  226337 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-649810 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1228 07:06:05.794934  226337 cli_runner.go:164] Run: docker container inspect force-systemd-flag-649810 --format={{.State.Status}}
	I1228 07:06:05.820643  226337 machine.go:94] provisionDockerMachine start ...
	I1228 07:06:05.820722  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:05.844435  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:05.845616  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:05.845648  226337 main.go:144] libmachine: About to run SSH command:
	hostname
	I1228 07:06:05.846196  226337 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38420->127.0.0.1:32999: read: connection reset by peer
	I1228 07:06:08.999828  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-649810
	
	I1228 07:06:08.999858  226337 ubuntu.go:182] provisioning hostname "force-systemd-flag-649810"
	I1228 07:06:08.999919  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:09.037969  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:09.038372  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:09.038392  226337 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-649810 && echo "force-systemd-flag-649810" | sudo tee /etc/hostname
	I1228 07:06:09.203095  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-649810
	
	I1228 07:06:09.203197  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:09.226561  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:09.226886  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:09.226912  226337 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-649810' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-649810/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-649810' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1228 07:06:09.376692  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1228 07:06:09.376727  226337 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22352-2382/.minikube CaCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22352-2382/.minikube}
	I1228 07:06:09.376753  226337 ubuntu.go:190] setting up certificates
	I1228 07:06:09.376763  226337 provision.go:84] configureAuth start
	I1228 07:06:09.376841  226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
	I1228 07:06:09.402227  226337 provision.go:143] copyHostCerts
	I1228 07:06:09.402278  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:09.402318  226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem, removing ...
	I1228 07:06:09.402325  226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem
	I1228 07:06:09.402409  226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/ca.pem (1082 bytes)
	I1228 07:06:09.402515  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:09.402540  226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem, removing ...
	I1228 07:06:09.402545  226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem
	I1228 07:06:09.402581  226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/cert.pem (1123 bytes)
	I1228 07:06:09.402643  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:09.402664  226337 exec_runner.go:144] found /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem, removing ...
	I1228 07:06:09.402677  226337 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem
	I1228 07:06:09.402711  226337 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22352-2382/.minikube/key.pem (1675 bytes)
	I1228 07:06:09.402788  226337 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-649810 san=[127.0.0.1 192.168.76.2 force-systemd-flag-649810 localhost minikube]
	I1228 07:06:09.752728  226337 provision.go:177] copyRemoteCerts
	I1228 07:06:09.752940  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1228 07:06:09.753068  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:09.785834  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:09.313580  224989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.crt ...
	I1228 07:06:09.313614  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.crt: {Name:mk226378e6b56b52aadfd2ded9c681fe9c5660f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.313849  224989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.key ...
	I1228 07:06:09.313868  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/client.key: {Name:mkcc683b6246ddbcb384c8b2698e7b4e6bc914ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.314021  224989 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178
	I1228 07:06:09.314041  224989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1228 07:06:09.463095  224989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178 ...
	I1228 07:06:09.463128  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178: {Name:mkfde4390a73a278f9a7c15f9f5fdc700e21d8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.463358  224989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178 ...
	I1228 07:06:09.463373  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178: {Name:mkecaae751c91f71a1173b180132498bec7d17df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:09.463474  224989 certs.go:382] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt.b6f59178 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt
	I1228 07:06:09.463557  224989 certs.go:386] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key.b6f59178 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key
	I1228 07:06:09.463618  224989 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key
	I1228 07:06:09.463637  224989 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt with IP's: []
	I1228 07:06:10.009976  224989 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt ...
	I1228 07:06:10.010371  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt: {Name:mk292857f257843736c8a536a36ea0671e82d753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:10.010674  224989 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key ...
	I1228 07:06:10.010718  224989 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key: {Name:mkc34bf3df42643e166dafc6a7c7b665e63ec741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
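crypto.go here acts as minikube's in-process CA: each profile cert is a leaf signed by the shared minikubeCA key. An openssl rendition of the apiserver cert step (SAN IPs copied from the log above; the ca.crt/ca.key and output file names are illustrative):

    # Illustrative only: issue a CA-signed cert with the SAN IPs seen above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')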
	I1228 07:06:10.010906  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:06:10.010964  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:06:10.010998  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:06:10.011046  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:06:10.011084  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:06:10.011118  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:06:10.011165  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:06:10.011206  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:06:10.011314  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem (1338 bytes)
	W1228 07:06:10.011379  224989 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202_empty.pem, impossibly tiny 0 bytes
	I1228 07:06:10.011419  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem (1679 bytes)
	I1228 07:06:10.011470  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:06:10.011527  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:06:10.011576  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem (1675 bytes)
	I1228 07:06:10.011669  224989 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:10.011728  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.011775  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem -> /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.011810  224989 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.012461  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:06:10.041564  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1228 07:06:10.069211  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:06:10.099395  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:06:10.125275  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:06:10.145764  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:06:10.166815  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:06:10.188644  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-env-475689/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:06:10.210572  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:06:10.232377  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem --> /usr/share/ca-certificates/4202.pem (1338 bytes)
	I1228 07:06:10.258152  224989 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /usr/share/ca-certificates/42022.pem (1708 bytes)
	I1228 07:06:10.278721  224989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:06:10.294217  224989 ssh_runner.go:195] Run: openssl version
	I1228 07:06:10.301133  224989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.308978  224989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:06:10.316640  224989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.320690  224989 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.320759  224989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:10.364308  224989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:06:10.377244  224989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:06:10.387661  224989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.397975  224989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4202.pem /etc/ssl/certs/4202.pem
	I1228 07:06:10.406609  224989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.411012  224989 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.411075  224989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4202.pem
	I1228 07:06:10.460006  224989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:06:10.469064  224989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4202.pem /etc/ssl/certs/51391683.0
	I1228 07:06:10.479105  224989 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.487833  224989 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42022.pem /etc/ssl/certs/42022.pem
	I1228 07:06:10.498878  224989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.504606  224989 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.504673  224989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42022.pem
	I1228 07:06:10.567710  224989 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:06:10.577669  224989 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42022.pem /etc/ssl/certs/3ec20f2e.0
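The three openssl x509 -hash / ln -fs pairs above amount to a manual c_rehash: OpenSSL looks up trusted certs in /etc/ssl/certs by subject-hash symlinks named <hash>.0. The general pattern for one cert:

    # Hash-link a CA cert so OpenSSL's directory lookup can find it.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"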
	I1228 07:06:10.585672  224989 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:06:10.590990  224989 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:06:10.591091  224989 kubeadm.go:401] StartCluster: {Name:force-systemd-env-475689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-475689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:06:10.591254  224989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:06:10.620859  224989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:06:10.633799  224989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:06:10.644816  224989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:06:10.644931  224989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:06:10.657320  224989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:06:10.657399  224989 kubeadm.go:158] found existing configuration files:
	
	I1228 07:06:10.657484  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:06:10.667230  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:06:10.667344  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:06:10.676219  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:06:10.686094  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:06:10.686210  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:06:10.694628  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:06:10.703822  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:06:10.703940  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:06:10.712262  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:06:10.721927  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:06:10.722063  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
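The grep/rm sequence above is the stale-config sweep: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it (on this first start all four are simply absent). Expressed as a loop:

    # Equivalent cleanup, endpoint as in this log.
    ep="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done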
	I1228 07:06:10.731589  224989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:06:10.772893  224989 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:06:10.773092  224989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:06:10.884593  224989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:06:10.884763  224989 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:06:10.884831  224989 kubeadm.go:319] OS: Linux
	I1228 07:06:10.884886  224989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:06:10.884938  224989 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:06:10.884990  224989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:06:10.885041  224989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:06:10.885093  224989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:06:10.885144  224989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:06:10.885194  224989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:06:10.885246  224989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:06:10.885296  224989 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:06:10.981004  224989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:06:10.981127  224989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:06:10.981225  224989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:06:10.996930  224989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:06:11.003793  224989 out.go:252]   - Generating certificates and keys ...
	I1228 07:06:11.003980  224989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:06:11.004103  224989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:06:11.101359  224989 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:06:11.625276  224989 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:06:12.079231  224989 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:06:12.156403  224989 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:06:12.372673  224989 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:06:12.372817  224989 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:06:12.763495  224989 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:06:12.763790  224989 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1228 07:06:13.220874  224989 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:06:13.320720  224989 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:06:13.640940  224989 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:06:13.641017  224989 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:06:13.820509  224989 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:06:14.044125  224989 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:06:09.921106  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1228 07:06:09.921170  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1228 07:06:09.942067  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1228 07:06:09.942130  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1228 07:06:09.962397  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1228 07:06:09.962470  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1228 07:06:09.983245  226337 provision.go:87] duration metric: took 606.461413ms to configureAuth
	I1228 07:06:09.983284  226337 ubuntu.go:206] setting minikube options for container-runtime
	I1228 07:06:09.983486  226337 config.go:182] Loaded profile config "force-systemd-flag-649810": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:06:09.983556  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:10.018234  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:10.018571  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:10.018580  226337 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1228 07:06:10.171532  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1228 07:06:10.171556  226337 ubuntu.go:71] root file system type: overlay
	I1228 07:06:10.171677  226337 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1228 07:06:10.171764  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:10.196947  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:10.197266  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:10.197352  226337 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1228 07:06:10.359758  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1228 07:06:10.359847  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:10.387300  226337 main.go:144] libmachine: Using SSH client type: native
	I1228 07:06:10.387769  226337 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I1228 07:06:10.387790  226337 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1228 07:06:11.609721  226337 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-28 07:06:10.353229981 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
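The SSH command above is a change-detection idiom: `diff -u` exits non-zero when the rendered unit differs from the installed one (or the installed one is missing), and only then is the new file moved into place and Docker re-enabled and restarted. Restated as a sketch with the same paths:

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  # Unit changed: install it, then reload and restart the service.
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi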
	
	I1228 07:06:11.609760  226337 machine.go:97] duration metric: took 5.78909378s to provisionDockerMachine
	I1228 07:06:11.609782  226337 client.go:176] duration metric: took 11.182753053s to LocalClient.Create
	I1228 07:06:11.609802  226337 start.go:167] duration metric: took 11.182887652s to libmachine.API.Create "force-systemd-flag-649810"
	I1228 07:06:11.609811  226337 start.go:293] postStartSetup for "force-systemd-flag-649810" (driver="docker")
	I1228 07:06:11.609821  226337 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1228 07:06:11.609893  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1228 07:06:11.609934  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:11.637109  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:11.737067  226337 ssh_runner.go:195] Run: cat /etc/os-release
	I1228 07:06:11.740612  226337 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1228 07:06:11.740643  226337 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1228 07:06:11.740655  226337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/addons for local assets ...
	I1228 07:06:11.740714  226337 filesync.go:126] Scanning /home/jenkins/minikube-integration/22352-2382/.minikube/files for local assets ...
	I1228 07:06:11.740797  226337 filesync.go:149] local asset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> 42022.pem in /etc/ssl/certs
	I1228 07:06:11.740808  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /etc/ssl/certs/42022.pem
	I1228 07:06:11.740908  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1228 07:06:11.750293  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:11.773152  226337 start.go:296] duration metric: took 163.328024ms for postStartSetup
	I1228 07:06:11.773520  226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
	I1228 07:06:11.810119  226337 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/config.json ...
	I1228 07:06:11.810475  226337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 07:06:11.810541  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:11.832046  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:11.938437  226337 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1228 07:06:11.944396  226337 start.go:128] duration metric: took 11.525566626s to createHost
	I1228 07:06:11.944421  226337 start.go:83] releasing machines lock for "force-systemd-flag-649810", held for 11.52572031s
	I1228 07:06:11.944491  226337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-649810
	I1228 07:06:11.969414  226337 ssh_runner.go:195] Run: cat /version.json
	I1228 07:06:11.969478  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:11.969777  226337 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1228 07:06:11.969830  226337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-649810
	I1228 07:06:12.000799  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:12.017933  226337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/force-systemd-flag-649810/id_rsa Username:docker}
	I1228 07:06:12.132244  226337 ssh_runner.go:195] Run: systemctl --version
	I1228 07:06:12.231712  226337 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1228 07:06:12.236758  226337 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1228 07:06:12.236869  226337 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1228 07:06:12.267687  226337 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1228 07:06:12.267755  226337 start.go:496] detecting cgroup driver to use...
	I1228 07:06:12.267782  226337 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:12.267953  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:12.283188  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1228 07:06:12.293095  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1228 07:06:12.304428  226337 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1228 07:06:12.304533  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1228 07:06:12.313854  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:12.323205  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1228 07:06:12.332643  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1228 07:06:12.341934  226337 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1228 07:06:12.350791  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1228 07:06:12.360609  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1228 07:06:12.369833  226337 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1228 07:06:12.379802  226337 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1228 07:06:12.388095  226337 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1228 07:06:12.396058  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:12.536042  226337 ssh_runner.go:195] Run: sudo systemctl restart containerd
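The run of sed edits above rewrites /etc/containerd/config.toml in place; the one that matters for this test is the flip to the systemd cgroup driver. A sketch of that step plus a quick check (the surrounding TOML layout varies by containerd version):

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = true
	sudo systemctl daemon-reload && sudo systemctl restart containerd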
	I1228 07:06:12.674161  226337 start.go:496] detecting cgroup driver to use...
	I1228 07:06:12.674237  226337 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1228 07:06:12.674325  226337 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1228 07:06:12.699050  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:12.712858  226337 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1228 07:06:12.751092  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1228 07:06:12.769844  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1228 07:06:12.792531  226337 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1228 07:06:12.815446  226337 ssh_runner.go:195] Run: which cri-dockerd
	I1228 07:06:12.819518  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1228 07:06:12.829311  226337 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1228 07:06:12.845032  226337 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1228 07:06:12.989013  226337 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1228 07:06:13.140533  226337 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1228 07:06:13.140637  226337 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1228 07:06:13.157163  226337 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1228 07:06:13.171809  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:13.314373  226337 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1228 07:06:13.798806  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1228 07:06:13.813917  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1228 07:06:13.829978  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:13.845472  226337 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1228 07:06:13.990757  226337 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1228 07:06:14.139338  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:14.291076  226337 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1228 07:06:14.316287  226337 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1228 07:06:14.331768  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:14.475844  226337 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1228 07:06:14.562973  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1228 07:06:14.582946  226337 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1228 07:06:14.583063  226337 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1228 07:06:14.587245  226337 start.go:574] Will wait 60s for crictl version
	I1228 07:06:14.587308  226337 ssh_runner.go:195] Run: which crictl
	I1228 07:06:14.591035  226337 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1228 07:06:14.619000  226337 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1228 07:06:14.619117  226337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:14.654802  226337 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1228 07:06:14.681742  226337 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1228 07:06:14.681898  226337 cli_runner.go:164] Run: docker network inspect force-systemd-flag-649810 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1228 07:06:14.699922  226337 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1228 07:06:14.704031  226337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:06:14.718757  226337 kubeadm.go:884] updating cluster {Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1228 07:06:14.718869  226337 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1228 07:06:14.718923  226337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:14.738067  226337 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:14.738094  226337 docker.go:624] Images already preloaded, skipping extraction
	I1228 07:06:14.738159  226337 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1228 07:06:14.765792  226337 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1228 07:06:14.765815  226337 cache_images.go:86] Images are preloaded, skipping loading
	I1228 07:06:14.765825  226337 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1228 07:06:14.765924  226337 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-649810 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1228 07:06:14.766001  226337 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1228 07:06:14.828478  226337 cni.go:84] Creating CNI manager for ""
	I1228 07:06:14.828557  226337 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 07:06:14.828591  226337 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1228 07:06:14.828637  226337 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-649810 NodeName:force-systemd-flag-649810 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1228 07:06:14.828791  226337 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-649810"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
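	The generated file stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. As a sanity check before `kubeadm init`, recent kubeadm releases (v1.26+) can validate the whole multi-document file; a sketch using the paths from this run:
	
		sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml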
	
	I1228 07:06:14.828879  226337 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1228 07:06:14.837993  226337 binaries.go:51] Found k8s binaries, skipping transfer
	I1228 07:06:14.838058  226337 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1228 07:06:14.846918  226337 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1228 07:06:14.862481  226337 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1228 07:06:14.877609  226337 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1228 07:06:14.893026  226337 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1228 07:06:14.897112  226337 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1228 07:06:14.908147  226337 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1228 07:06:14.785380  224989 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:06:15.040596  224989 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:06:15.253841  224989 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:06:15.253946  224989 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:06:15.254018  224989 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:06:15.112938  226337 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1228 07:06:15.149385  226337 certs.go:69] Setting up /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810 for IP: 192.168.76.2
	I1228 07:06:15.149408  226337 certs.go:195] generating shared ca certs ...
	I1228 07:06:15.149425  226337 certs.go:227] acquiring lock for ca certs: {Name:mkb08779780dcf6b96f2c93a4ec9c28968a3dff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.149572  226337 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key
	I1228 07:06:15.149628  226337 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key
	I1228 07:06:15.149636  226337 certs.go:257] generating profile certs ...
	I1228 07:06:15.149691  226337 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key
	I1228 07:06:15.149702  226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt with IP's: []
	I1228 07:06:15.327648  226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt ...
	I1228 07:06:15.327721  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.crt: {Name:mkf75acb8f7153fe0d0255b564acb6149af2fb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.327938  226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key ...
	I1228 07:06:15.327982  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/client.key: {Name:mk51f561ed38ca116434114e1f62874070b9255b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.328119  226337 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1
	I1228 07:06:15.328164  226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1228 07:06:15.764980  226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 ...
	I1228 07:06:15.765013  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1: {Name:mk91fced1432c5d7a2938e5f8f1f25ea86d8f5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.765212  226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1 ...
	I1228 07:06:15.765227  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1: {Name:mk1a153a4b0a803bdf2ccf3b1ffb3b75a611c21d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:15.765314  226337 certs.go:382] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt.aa9e84f1 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt
	I1228 07:06:15.765393  226337 certs.go:386] copying /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key.aa9e84f1 -> /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key
	I1228 07:06:15.765455  226337 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key
	I1228 07:06:15.765467  226337 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt with IP's: []
	I1228 07:06:16.054118  226337 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt ...
	I1228 07:06:16.054154  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt: {Name:mk48c2c2ab804522bc505c3ba557fdae87d36100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:16.054331  226337 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key ...
	I1228 07:06:16.054347  226337 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key: {Name:mk69a54a58808c1b19f454fc1eed5065bebd15fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 07:06:16.054418  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1228 07:06:16.054445  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1228 07:06:16.054466  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1228 07:06:16.054482  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1228 07:06:16.054500  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1228 07:06:16.054517  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1228 07:06:16.054529  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1228 07:06:16.054543  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1228 07:06:16.054593  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem (1338 bytes)
	W1228 07:06:16.054636  226337 certs.go:480] ignoring /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202_empty.pem, impossibly tiny 0 bytes
	I1228 07:06:16.054649  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca-key.pem (1679 bytes)
	I1228 07:06:16.054677  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/ca.pem (1082 bytes)
	I1228 07:06:16.054705  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/cert.pem (1123 bytes)
	I1228 07:06:16.054746  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/key.pem (1675 bytes)
	I1228 07:06:16.054797  226337 certs.go:484] found cert: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem (1708 bytes)
	I1228 07:06:16.054833  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.054850  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem -> /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.054861  226337 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem -> /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.055446  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1228 07:06:16.078627  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1228 07:06:16.098321  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1228 07:06:16.116670  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1228 07:06:16.134710  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1228 07:06:16.152509  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1228 07:06:16.170879  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1228 07:06:16.188865  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/force-systemd-flag-649810/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1228 07:06:16.206838  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1228 07:06:16.226258  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/certs/4202.pem --> /usr/share/ca-certificates/4202.pem (1338 bytes)
	I1228 07:06:16.245780  226337 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/ssl/certs/42022.pem --> /usr/share/ca-certificates/42022.pem (1708 bytes)
	I1228 07:06:16.268659  226337 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1228 07:06:16.283601  226337 ssh_runner.go:195] Run: openssl version
	I1228 07:06:16.290585  226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.299195  226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1228 07:06:16.307118  226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.310841  226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 28 06:28 /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.310916  226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1228 07:06:16.352064  226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1228 07:06:16.359487  226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1228 07:06:16.366859  226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.374261  226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4202.pem /etc/ssl/certs/4202.pem
	I1228 07:06:16.381698  226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.385388  226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 28 06:34 /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.385461  226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4202.pem
	I1228 07:06:16.426915  226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1228 07:06:16.434366  226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4202.pem /etc/ssl/certs/51391683.0
	I1228 07:06:16.441642  226337 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.449184  226337 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/42022.pem /etc/ssl/certs/42022.pem
	I1228 07:06:16.456957  226337 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.460669  226337 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 28 06:34 /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.460736  226337 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42022.pem
	I1228 07:06:16.501782  226337 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1228 07:06:16.509722  226337 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/42022.pem /etc/ssl/certs/3ec20f2e.0
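The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found via a symlink named after its subject-name hash with a `.0` suffix (b5213941.0 for minikubeCA.pem here). Generalized as a sketch:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")     # b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"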
	I1228 07:06:16.517199  226337 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1228 07:06:16.520925  226337 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1228 07:06:16.520999  226337 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-649810 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-649810 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 07:06:16.521142  226337 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1228 07:06:16.537329  226337 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1228 07:06:16.545115  226337 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1228 07:06:16.552764  226337 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:06:16.552877  226337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:06:16.560792  226337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:06:16.560864  226337 kubeadm.go:158] found existing configuration files:
	
	I1228 07:06:16.560941  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:06:16.568352  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:06:16.568441  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:06:16.575993  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:06:16.583437  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:06:16.583546  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:06:16.590681  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:06:16.598093  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:06:16.598202  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:06:16.605574  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:06:16.613280  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:06:16.613396  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
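Each of the four grep-then-rm pairs above keeps a kubeconfig only if it already references the expected control-plane endpoint; a missing or stale file is removed so kubeadm will regenerate it. The same logic as a loop:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done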
	I1228 07:06:16.620636  226337 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:06:16.661468  226337 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:06:16.661712  226337 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:06:16.758679  226337 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:06:16.758779  226337 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:06:16.758849  226337 kubeadm.go:319] OS: Linux
	I1228 07:06:16.758929  226337 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:06:16.759009  226337 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:06:16.759089  226337 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:06:16.759163  226337 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:06:16.759245  226337 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:06:16.759325  226337 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:06:16.759389  226337 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:06:16.759482  226337 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:06:16.759553  226337 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:06:16.835436  226337 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:06:16.835601  226337 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:06:16.835720  226337 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:06:16.852604  226337 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:06:15.257399  224989 out.go:252]   - Booting up control plane ...
	I1228 07:06:15.257504  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:06:15.257581  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:06:15.257650  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:06:15.284790  224989 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:06:15.284950  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:06:15.294293  224989 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:06:15.298846  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:06:15.298926  224989 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:06:15.485640  224989 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:06:15.485763  224989 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:06:16.858977  226337 out.go:252]   - Generating certificates and keys ...
	I1228 07:06:16.859071  226337 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:06:16.859148  226337 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:06:16.922161  226337 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1228 07:06:17.011768  226337 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1228 07:06:17.090969  226337 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1228 07:06:17.253680  226337 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1228 07:06:17.439963  226337 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1228 07:06:17.440300  226337 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:06:17.731890  226337 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1228 07:06:17.732248  226337 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1228 07:06:18.395961  226337 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1228 07:06:18.651951  226337 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1228 07:06:18.929995  226337 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1228 07:06:18.930273  226337 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:06:19.098124  226337 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:06:19.475849  226337 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:06:19.685709  226337 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:06:20.030601  226337 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:06:20.108979  226337 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:06:20.109747  226337 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:06:20.112581  226337 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:06:20.115925  226337 out.go:252]   - Booting up control plane ...
	I1228 07:06:20.116029  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:06:20.116107  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:06:20.116173  226337 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:06:20.131690  226337 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:06:20.131807  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:06:20.142201  226337 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:06:20.142570  226337 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:06:20.142817  226337 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:06:20.280684  226337 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:06:20.280818  226337 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:10:15.486766  224989 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001236445s
	I1228 07:10:15.486809  224989 kubeadm.go:319] 
	I1228 07:10:15.486868  224989 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:10:15.486901  224989 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:10:15.487013  224989 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:10:15.487023  224989 kubeadm.go:319] 
	I1228 07:10:15.487136  224989 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:10:15.487171  224989 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:10:15.487204  224989 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:10:15.487209  224989 kubeadm.go:319] 
	I1228 07:10:15.491521  224989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:10:15.491947  224989 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:10:15.492061  224989 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:10:15.492317  224989 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:10:15.492330  224989 kubeadm.go:319] 
	I1228 07:10:15.492406  224989 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1228 07:10:15.492558  224989 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-475689 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001236445s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1228 07:10:15.492645  224989 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:10:15.929418  224989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:10:15.943183  224989 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:10:15.943253  224989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:10:15.951638  224989 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:10:15.951662  224989 kubeadm.go:158] found existing configuration files:
	
	I1228 07:10:15.951723  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:10:15.959911  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:10:15.959978  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:10:15.967770  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:10:15.976086  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:10:15.976157  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:10:15.984175  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:10:15.992276  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:10:15.992347  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:10:16.001958  224989 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:10:16.013995  224989 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:10:16.014062  224989 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:10:16.023133  224989 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:10:16.074115  224989 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1228 07:10:16.074339  224989 kubeadm.go:319] [preflight] Running pre-flight checks
	I1228 07:10:16.150923  224989 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1228 07:10:16.150997  224989 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1228 07:10:16.151033  224989 kubeadm.go:319] OS: Linux
	I1228 07:10:16.151080  224989 kubeadm.go:319] CGROUPS_CPU: enabled
	I1228 07:10:16.151128  224989 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1228 07:10:16.151176  224989 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1228 07:10:16.151224  224989 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1228 07:10:16.151272  224989 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1228 07:10:16.151320  224989 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1228 07:10:16.151367  224989 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1228 07:10:16.151416  224989 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1228 07:10:16.151462  224989 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1228 07:10:16.220799  224989 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1228 07:10:16.220924  224989 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1228 07:10:16.221027  224989 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1228 07:10:16.234952  224989 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1228 07:10:16.240621  224989 out.go:252]   - Generating certificates and keys ...
	I1228 07:10:16.240734  224989 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1228 07:10:16.240813  224989 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1228 07:10:16.240895  224989 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1228 07:10:16.240965  224989 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1228 07:10:16.241080  224989 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1228 07:10:16.241179  224989 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1228 07:10:16.241283  224989 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1228 07:10:16.241398  224989 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1228 07:10:16.241511  224989 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1228 07:10:16.241637  224989 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1228 07:10:16.241715  224989 kubeadm.go:319] [certs] Using the existing "sa" key
	I1228 07:10:16.241807  224989 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1228 07:10:16.339903  224989 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1228 07:10:16.724907  224989 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1228 07:10:16.794526  224989 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1228 07:10:16.947622  224989 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1228 07:10:17.124090  224989 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1228 07:10:17.124968  224989 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1228 07:10:17.127679  224989 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1228 07:10:17.133057  224989 out.go:252]   - Booting up control plane ...
	I1228 07:10:17.133167  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1228 07:10:17.133245  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1228 07:10:17.133313  224989 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1228 07:10:17.151708  224989 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1228 07:10:17.151859  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1228 07:10:17.160427  224989 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1228 07:10:17.161351  224989 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1228 07:10:17.161581  224989 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1228 07:10:17.294931  224989 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1228 07:10:17.295052  224989 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1228 07:10:20.275741  226337 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001182866s
	I1228 07:10:20.275772  226337 kubeadm.go:319] 
	I1228 07:10:20.275828  226337 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:10:20.275861  226337 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:10:20.275960  226337 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:10:20.275966  226337 kubeadm.go:319] 
	I1228 07:10:20.276064  226337 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:10:20.276095  226337 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:10:20.276124  226337 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:10:20.276128  226337 kubeadm.go:319] 
	I1228 07:10:20.279666  226337 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:10:20.280132  226337 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:10:20.280283  226337 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:10:20.280566  226337 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:10:20.280582  226337 kubeadm.go:319] 
	I1228 07:10:20.280656  226337 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1228 07:10:20.280798  226337 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-649810 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001182866s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1228 07:10:20.280887  226337 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1228 07:10:20.708651  226337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 07:10:20.722039  226337 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1228 07:10:20.722109  226337 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1228 07:10:20.730359  226337 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1228 07:10:20.730423  226337 kubeadm.go:158] found existing configuration files:
	
	I1228 07:10:20.730491  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1228 07:10:20.738525  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1228 07:10:20.738593  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1228 07:10:20.746327  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1228 07:10:20.754111  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1228 07:10:20.754179  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1228 07:10:20.761709  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1228 07:10:20.769442  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1228 07:10:20.769505  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1228 07:10:20.777179  226337 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1228 07:10:20.785378  226337 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1228 07:10:20.785469  226337 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1228 07:10:20.793339  226337 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1228 07:10:20.906011  226337 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:10:20.906414  226337 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:10:20.974641  226337 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:14:17.295571  224989 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001050371s
	I1228 07:14:17.295599  224989 kubeadm.go:319] 
	I1228 07:14:17.295657  224989 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1228 07:14:17.295694  224989 kubeadm.go:319] 	- The kubelet is not running
	I1228 07:14:17.295833  224989 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1228 07:14:17.295849  224989 kubeadm.go:319] 
	I1228 07:14:17.295955  224989 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1228 07:14:17.295988  224989 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1228 07:14:17.296019  224989 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1228 07:14:17.296023  224989 kubeadm.go:319] 
	I1228 07:14:17.299652  224989 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1228 07:14:17.300134  224989 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1228 07:14:17.300273  224989 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1228 07:14:17.300566  224989 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1228 07:14:17.300581  224989 kubeadm.go:319] 
	I1228 07:14:17.300718  224989 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1228 07:14:17.300738  224989 kubeadm.go:403] duration metric: took 8m6.709649294s to StartCluster
	I1228 07:14:17.300826  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.313111  224989 logs.go:279] Failed to list containers for "kube-apiserver": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.313186  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.324379  224989 logs.go:279] Failed to list containers for "etcd": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.324445  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.335616  224989 logs.go:279] Failed to list containers for "coredns": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.335682  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.346699  224989 logs.go:279] Failed to list containers for "kube-scheduler": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.346770  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.357801  224989 logs.go:279] Failed to list containers for "kube-proxy": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.357874  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.368852  224989 logs.go:279] Failed to list containers for "kube-controller-manager": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.368918  224989 ssh_runner.go:195] Run: sudo runc list -f json
	E1228 07:14:17.380328  224989 logs.go:279] Failed to list containers for "kindnet": runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-28T07:14:17Z" level=error msg="open /run/runc: no such file or directory"
	I1228 07:14:17.380354  224989 logs.go:123] Gathering logs for kubelet ...
	I1228 07:14:17.380365  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1228 07:14:17.438037  224989 logs.go:123] Gathering logs for dmesg ...
	I1228 07:14:17.438073  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1228 07:14:17.453671  224989 logs.go:123] Gathering logs for describe nodes ...
	I1228 07:14:17.453699  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1228 07:14:17.528877  224989 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:14:17.519796    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.520230    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.522445    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.522831    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.524536    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1228 07:14:17.519796    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.520230    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.522445    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.522831    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:17.524536    5494 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1228 07:14:17.528911  224989 logs.go:123] Gathering logs for Docker ...
	I1228 07:14:17.528924  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1228 07:14:17.554197  224989 logs.go:123] Gathering logs for container status ...
	I1228 07:14:17.554232  224989 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1228 07:14:17.598393  224989 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001050371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1228 07:14:17.598446  224989 out.go:285] * 
	W1228 07:14:17.598529  224989 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001050371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:17.598551  224989 out.go:285] * 
	W1228 07:14:17.598802  224989 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1228 07:14:17.605739  224989 out.go:203] 
	W1228 07:14:17.608493  224989 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001050371s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1228 07:14:17.608549  224989 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1228 07:14:17.608573  224989 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1228 07:14:17.611807  224989 out.go:203] 
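	A note on the failure pattern above: every kubeadm attempt dies the same way. The kubelet never answers http://127.0.0.1:10248/healthz because it exits during configuration validation on a cgroup v1 host (see the kubelet journal at the end of this report). The log itself names two ways out; the sketch below only restates them and is not a verified fix: the --extra-config spelling is copied from the Suggestion line above, and the failCgroupV1 field name follows the kubeadm WARNING text (KEP-5573), so either may differ across minikube/kubelet versions.
	
	  # Confirm which cgroup version the node runs
	  # ('cgroup2fs' means v2; 'tmpfs' means a v1/hybrid hierarchy)
	  stat -fc %T /sys/fs/cgroup/
	
	  # Workaround 1 (from the Suggestion line): pass the cgroup driver explicitly
	  out/minikube-linux-arm64 start -p force-systemd-flag-649810 --force-systemd \
	    --driver=docker --container-runtime=docker \
	    --extra-config=kubelet.cgroup-driver=systemd
	
	  # Workaround 2 (from the WARNING text): re-enable cgroup v1 in the kubelet
	  # configuration kubeadm writes (/var/lib/kubelet/config.yaml):
	  #   failCgroupV1: false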
	
	
	==> Docker <==
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.675301348Z" level=info msg="Restoring containers: start."
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.692710164Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.708619415Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.880886265Z" level=info msg="Loading containers: done."
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.892026624Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.892081960Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.892121279Z" level=info msg="Initializing buildkit"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.909367951Z" level=info msg="Completed buildkit initialization"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.917884209Z" level=info msg="Daemon has completed initialization"
	Dec 28 07:06:07 force-systemd-env-475689 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.920435390Z" level=info msg="API listen on /run/docker.sock"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.920699147Z" level=info msg="API listen on [::]:2376"
	Dec 28 07:06:07 force-systemd-env-475689 dockerd[1145]: time="2025-12-28T07:06:07.920789872Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 28 07:06:08 force-systemd-env-475689 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Start docker client with request timeout 0s"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Loaded network plugin cni"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Setting cgroupDriver systemd"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 28 07:06:08 force-systemd-env-475689 cri-dockerd[1427]: time="2025-12-28T07:06:08Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 28 07:06:08 force-systemd-env-475689 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
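	Reading the Docker journal: the engine half of --force-systemd behaved as intended. dockerd logs the cgroup v1 deprecation warning (so the node really is on cgroup v1), and cri-dockerd logs "Setting cgroupDriver systemd", so the systemd driver was applied; the failure sits entirely in the kubelet's cgroup v1 validation. A quick cross-check from outside the node, as a sketch (profile name taken from the hostname in the journal; the two Go template fields are part of "docker info" on current engines):
	
	  minikube ssh -p force-systemd-env-475689 -- \
	    docker info --format '{{.CgroupDriver}} (cgroup v{{.CgroupVersion}})'
	  # expected on this node: systemd (cgroup v1)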
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1228 07:14:18.966858    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:18.967634    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:18.969245    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:18.969557    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1228 07:14:18.971036    5644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
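	The describe-nodes failure is downstream of the same fault, not a second one: with the kubelet crash-looping, the static-pod manifests in /etc/kubernetes/manifests never start, so nothing listens on 8443 and every kubectl call is refused. A sketch of the two probes involved, run from inside the node (both addresses appear verbatim in the log above):
	
	  curl -sS http://127.0.0.1:10248/healthz   # kubelet healthz; times out here
	  curl -sk https://localhost:8443/healthz   # apiserver; connection refused here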
	
	
	==> dmesg <==
	[Dec28 06:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015148] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.500432] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034760] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.784008] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.137634] kauditd_printk_skb: 36 callbacks suppressed
	[Dec28 06:42] hrtimer: interrupt took 11242004 ns
	
	
	==> kernel <==
	 07:14:19 up 56 min,  0 user,  load average: 0.51, 0.91, 1.82
	Linux force-systemd-env-475689 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 28 07:14:15 force-systemd-env-475689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:16 force-systemd-env-475689 kubelet[5433]: E1228 07:14:16.052516    5433 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:16 force-systemd-env-475689 kubelet[5438]: E1228 07:14:16.793947    5438 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:16 force-systemd-env-475689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:17 force-systemd-env-475689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 28 07:14:17 force-systemd-env-475689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:17 force-systemd-env-475689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:17 force-systemd-env-475689 kubelet[5499]: E1228 07:14:17.558863    5499 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:17 force-systemd-env-475689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:17 force-systemd-env-475689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:18 force-systemd-env-475689 kubelet[5556]: E1228 07:14:18.307614    5556 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 28 07:14:18 force-systemd-env-475689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-475689 -n force-systemd-env-475689
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-475689 -n force-systemd-env-475689: exit status 6 (432.040453ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1228 07:14:19.594595  238323 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-475689" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-475689" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-475689" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-475689
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-475689: (1.821626855s)
--- FAIL: TestForceSystemdEnv (507.30s)
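Note on the failure mode: the kubelet log above shows kubelet v1.35.0 exiting during configuration validation ("kubelet is configured to not run on a host using cgroup v1") while systemd restarts it in a loop (restart counter 319-322), so the API server never comes up and the later kubectl calls are refused. The kernel section identifies the agent as Ubuntu 20.04 on kernel 5.15.0-1084-aws, a distribution that still boots with the legacy cgroup v1 hierarchy by default, and with the docker driver the node container shares the host's cgroup hierarchy. A minimal check, assuming shell access to the affected Jenkins agent:

	# Prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on a legacy cgroup v1 host.
	stat -fc %T /sys/fs/cgroup/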


Test pass (324/352)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.67
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.17
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
22 TestOffline 77.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 138.31
29 TestAddons/serial/Volcano 42.55
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 10.94
35 TestAddons/parallel/Registry 18.03
36 TestAddons/parallel/RegistryCreds 0.72
37 TestAddons/parallel/Ingress 21.01
38 TestAddons/parallel/InspektorGadget 10.78
39 TestAddons/parallel/MetricsServer 6.78
41 TestAddons/parallel/CSI 62.89
42 TestAddons/parallel/Headlamp 17.88
43 TestAddons/parallel/CloudSpanner 5.66
44 TestAddons/parallel/LocalPath 10.7
45 TestAddons/parallel/NvidiaDevicePlugin 5.7
46 TestAddons/parallel/Yakd 10.87
48 TestAddons/StoppedEnableDisable 11.38
49 TestCertOptions 34.8
50 TestCertExpiration 244.88
51 TestDockerFlags 36.71
58 TestErrorSpam/setup 29.76
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.06
61 TestErrorSpam/pause 1.55
62 TestErrorSpam/unpause 1.78
63 TestErrorSpam/stop 11.26
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 70.27
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.88
70 TestFunctional/serial/KubeContext 0.1
71 TestFunctional/serial/KubectlGetPods 0.16
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
75 TestFunctional/serial/CacheCmd/cache/add_local 0.92
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 42.2
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.23
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.81
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 7.79
91 TestFunctional/parallel/DryRun 0.48
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.18
97 TestFunctional/parallel/ServiceCmdConnect 8.59
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 19.82
101 TestFunctional/parallel/SSHCmd 0.81
102 TestFunctional/parallel/CpCmd 1.67
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.68
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
113 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
116 TestFunctional/parallel/Version/short 0.08
117 TestFunctional/parallel/Version/components 1.18
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.57
123 TestFunctional/parallel/ImageCommands/Setup 0.69
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.41
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
134 TestFunctional/parallel/DockerEnv/bash 1.04
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.23
144 TestFunctional/parallel/MountCmd/any-port 8.07
145 TestFunctional/parallel/MountCmd/specific-port 1.89
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
147 TestFunctional/parallel/ServiceCmd/DeployApp 8.3
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
149 TestFunctional/parallel/ProfileCmd/profile_list 0.49
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
151 TestFunctional/parallel/ServiceCmd/List 1.4
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.58
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.41
155 TestFunctional/parallel/ServiceCmd/URL 0.41
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 186.76
164 TestMultiControlPlane/serial/DeployApp 7.43
165 TestMultiControlPlane/serial/PingHostFromPods 1.72
166 TestMultiControlPlane/serial/AddWorkerNode 36.33
167 TestMultiControlPlane/serial/NodeLabels 0.12
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
169 TestMultiControlPlane/serial/CopyFile 20.28
170 TestMultiControlPlane/serial/StopSecondaryNode 12.36
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
172 TestMultiControlPlane/serial/RestartSecondaryNode 48.62
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 165.47
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.77
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
177 TestMultiControlPlane/serial/StopCluster 33.13
178 TestMultiControlPlane/serial/RestartCluster 66.68
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.89
180 TestMultiControlPlane/serial/AddSecondaryNode 55.25
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.21
184 TestImageBuild/serial/Setup 29.44
185 TestImageBuild/serial/NormalBuild 1.54
186 TestImageBuild/serial/BuildWithBuildArg 1
187 TestImageBuild/serial/BuildWithDockerIgnore 0.77
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.97
193 TestJSONOutput/start/Command 68.8
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.69
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.55
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 11.3
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.23
218 TestKicCustomNetwork/create_custom_network 30.87
219 TestKicCustomNetwork/use_default_bridge_network 31.63
220 TestKicExistingNetwork 30.34
221 TestKicCustomSubnet 30.42
222 TestKicStaticIP 31.36
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 63.46
227 TestMountStart/serial/StartWithMountFirst 9.94
228 TestMountStart/serial/VerifyMountFirst 0.26
229 TestMountStart/serial/StartWithMountSecond 10.4
230 TestMountStart/serial/VerifyMountSecond 0.27
231 TestMountStart/serial/DeleteFirst 1.56
232 TestMountStart/serial/VerifyMountPostDelete 0.28
233 TestMountStart/serial/Stop 1.29
234 TestMountStart/serial/RestartStopped 8.66
235 TestMountStart/serial/VerifyMountPostStop 0.26
238 TestMultiNode/serial/FreshStart2Nodes 85.4
239 TestMultiNode/serial/DeployApp2Nodes 5.18
240 TestMultiNode/serial/PingHostFrom2Pods 1.03
241 TestMultiNode/serial/AddNode 34.31
242 TestMultiNode/serial/MultiNodeLabels 0.11
243 TestMultiNode/serial/ProfileList 0.71
244 TestMultiNode/serial/CopyFile 10.49
245 TestMultiNode/serial/StopNode 2.41
246 TestMultiNode/serial/StartAfterStop 9.58
247 TestMultiNode/serial/RestartKeepsNodes 76.01
248 TestMultiNode/serial/DeleteNode 5.79
249 TestMultiNode/serial/StopMultiNode 22.04
250 TestMultiNode/serial/RestartMultiNode 50.86
251 TestMultiNode/serial/ValidateNameConflict 33.11
258 TestScheduledStopUnix 101.7
259 TestSkaffold 137.09
261 TestInsufficientStorage 12.86
262 TestRunningBinaryUpgrade 353.26
264 TestKubernetesUpgrade 96.5
265 TestMissingContainerUpgrade 89.64
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
268 TestNoKubernetes/serial/StartWithK8s 36.85
269 TestNoKubernetes/serial/StartWithStopK8s 9.4
270 TestNoKubernetes/serial/Start 9.18
271 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
273 TestNoKubernetes/serial/ProfileList 1.1
274 TestNoKubernetes/serial/Stop 1.33
275 TestNoKubernetes/serial/StartNoArgs 7.9
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
288 TestStoppedBinaryUpgrade/Setup 0.86
289 TestStoppedBinaryUpgrade/Upgrade 318.95
290 TestPreload/Start-NoPreload-PullImage 96.66
291 TestPreload/Restart-With-Preload-Check-User-Image 54.96
301 TestPause/serial/Start 48.02
302 TestPause/serial/SecondStartNoReconfiguration 39.93
303 TestPause/serial/Pause 0.7
304 TestPause/serial/VerifyStatus 0.33
305 TestPause/serial/Unpause 0.56
306 TestPause/serial/PauseAgain 0.97
307 TestPause/serial/DeletePaused 2.44
308 TestPause/serial/VerifyDeletedResources 0.54
309 TestNetworkPlugins/group/auto/Start 67.9
310 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
311 TestNetworkPlugins/group/kindnet/Start 62.72
312 TestNetworkPlugins/group/auto/KubeletFlags 0.32
313 TestNetworkPlugins/group/auto/NetCatPod 9.4
314 TestNetworkPlugins/group/auto/DNS 0.25
315 TestNetworkPlugins/group/auto/Localhost 0.18
316 TestNetworkPlugins/group/auto/HairPin 0.16
317 TestNetworkPlugins/group/calico/Start 67.9
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
320 TestNetworkPlugins/group/kindnet/NetCatPod 12.44
321 TestNetworkPlugins/group/kindnet/DNS 0.29
322 TestNetworkPlugins/group/kindnet/Localhost 0.29
323 TestNetworkPlugins/group/kindnet/HairPin 0.28
324 TestNetworkPlugins/group/custom-flannel/Start 53.87
325 TestNetworkPlugins/group/calico/ControllerPod 6.01
326 TestNetworkPlugins/group/calico/KubeletFlags 0.36
327 TestNetworkPlugins/group/calico/NetCatPod 11.35
328 TestNetworkPlugins/group/calico/DNS 0.24
329 TestNetworkPlugins/group/calico/Localhost 0.2
330 TestNetworkPlugins/group/calico/HairPin 0.21
331 TestNetworkPlugins/group/false/Start 71.66
332 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
333 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.25
334 TestNetworkPlugins/group/custom-flannel/DNS 0.26
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
337 TestNetworkPlugins/group/enable-default-cni/Start 47.35
338 TestNetworkPlugins/group/false/KubeletFlags 0.34
339 TestNetworkPlugins/group/false/NetCatPod 11.38
340 TestNetworkPlugins/group/false/DNS 0.25
341 TestNetworkPlugins/group/false/Localhost 0.16
342 TestNetworkPlugins/group/false/HairPin 0.19
343 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
344 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.44
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.36
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
348 TestNetworkPlugins/group/flannel/Start 55.2
349 TestNetworkPlugins/group/bridge/Start 71.6
350 TestNetworkPlugins/group/flannel/ControllerPod 6
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
352 TestNetworkPlugins/group/flannel/NetCatPod 11.33
353 TestNetworkPlugins/group/flannel/DNS 0.2
354 TestNetworkPlugins/group/flannel/Localhost 0.18
355 TestNetworkPlugins/group/flannel/HairPin 0.18
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
357 TestNetworkPlugins/group/bridge/NetCatPod 13.39
358 TestNetworkPlugins/group/kubenet/Start 73.85
359 TestNetworkPlugins/group/bridge/DNS 0.24
360 TestNetworkPlugins/group/bridge/Localhost 0.22
361 TestNetworkPlugins/group/bridge/HairPin 0.21
362 TestPreload/PreloadSrc/gcs 4.7
363 TestPreload/PreloadSrc/github 3.84
364 TestPreload/PreloadSrc/gcs-cached 0.84
366 TestStartStop/group/old-k8s-version/serial/FirstStart 60.23
367 TestNetworkPlugins/group/kubenet/KubeletFlags 0.5
368 TestNetworkPlugins/group/kubenet/NetCatPod 10.43
369 TestNetworkPlugins/group/kubenet/DNS 0.23
370 TestNetworkPlugins/group/kubenet/Localhost 0.17
371 TestNetworkPlugins/group/kubenet/HairPin 0.2
372 TestStartStop/group/old-k8s-version/serial/DeployApp 11.53
374 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 47.63
375 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.61
376 TestStartStop/group/old-k8s-version/serial/Stop 11.75
377 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.37
378 TestStartStop/group/old-k8s-version/serial/SecondStart 53.55
379 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
381 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.49
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.04
384 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
386 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
387 TestStartStop/group/old-k8s-version/serial/Pause 4.27
389 TestStartStop/group/embed-certs/serial/FirstStart 70.97
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.43
395 TestStartStop/group/no-preload/serial/FirstStart 52.26
396 TestStartStop/group/embed-certs/serial/DeployApp 9.47
397 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.65
398 TestStartStop/group/embed-certs/serial/Stop 11.34
399 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
400 TestStartStop/group/embed-certs/serial/SecondStart 52.33
401 TestStartStop/group/no-preload/serial/DeployApp 9.52
402 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.48
403 TestStartStop/group/no-preload/serial/Stop 11.92
404 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
405 TestStartStop/group/no-preload/serial/SecondStart 28.57
406 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
407 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
408 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
409 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
410 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
411 TestStartStop/group/embed-certs/serial/Pause 4.49
412 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
413 TestStartStop/group/no-preload/serial/Pause 4.7
415 TestStartStop/group/newest-cni/serial/FirstStart 34.69
416 TestStartStop/group/newest-cni/serial/DeployApp 0
417 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
418 TestStartStop/group/newest-cni/serial/Stop 6.11
419 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/newest-cni/serial/SecondStart 16.95
421 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
422 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
423 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
424 TestStartStop/group/newest-cni/serial/Pause 2.96

TestDownloadOnly/v1.28.0/json-events (5.67s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-304779 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-304779 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.673209867s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.67s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1228 06:28:03.737370    4202 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1228 06:28:03.737444    4202 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-304779
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-304779: exit status 85 (83.968281ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-304779 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-304779 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:27:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:27:58.110117    4208 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:27:58.110302    4208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:58.110324    4208 out.go:374] Setting ErrFile to fd 2...
	I1228 06:27:58.110353    4208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:27:58.110631    4208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	W1228 06:27:58.110792    4208 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22352-2382/.minikube/config/config.json: open /home/jenkins/minikube-integration/22352-2382/.minikube/config/config.json: no such file or directory
	I1228 06:27:58.111242    4208 out.go:368] Setting JSON to true
	I1228 06:27:58.112027    4208 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":627,"bootTime":1766902651,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 06:27:58.112153    4208 start.go:143] virtualization:  
	I1228 06:27:58.118098    4208 out.go:99] [download-only-304779] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1228 06:27:58.118320    4208 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball: no such file or directory
	I1228 06:27:58.118457    4208 notify.go:221] Checking for updates...
	I1228 06:27:58.123266    4208 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:27:58.126771    4208 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:27:58.130042    4208 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 06:27:58.133209    4208 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 06:27:58.136345    4208 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1228 06:27:58.142375    4208 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:27:58.142646    4208 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:27:58.167837    4208 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:27:58.167949    4208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:58.572426    4208 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-28 06:27:58.562998309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:27:58.572527    4208 docker.go:319] overlay module found
	I1228 06:27:58.575803    4208 out.go:99] Using the docker driver based on user configuration
	I1228 06:27:58.575836    4208 start.go:309] selected driver: docker
	I1228 06:27:58.575844    4208 start.go:928] validating driver "docker" against <nil>
	I1228 06:27:58.575941    4208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:27:58.635128    4208 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-28 06:27:58.626377947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:27:58.635275    4208 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:27:58.635574    4208 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1228 06:27:58.635740    4208 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:27:58.638971    4208 out.go:171] Using Docker driver with root privileges
	I1228 06:27:58.642083    4208 cni.go:84] Creating CNI manager for ""
	I1228 06:27:58.642153    4208 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1228 06:27:58.642169    4208 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1228 06:27:58.642247    4208 start.go:353] cluster config:
	{Name:download-only-304779 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-304779 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:27:58.645312    4208 out.go:99] Starting "download-only-304779" primary control-plane node in "download-only-304779" cluster
	I1228 06:27:58.645338    4208 cache.go:134] Beginning downloading kic base image for docker with docker
	I1228 06:27:58.648306    4208 out.go:99] Pulling base image v0.0.48-1766884053-22351 ...
	I1228 06:27:58.648349    4208 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1228 06:27:58.648494    4208 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local docker daemon
	I1228 06:27:58.664450    4208 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:27:58.664648    4208 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 in local cache directory
	I1228 06:27:58.664764    4208 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 to local cache
	I1228 06:27:58.695331    4208 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1228 06:27:58.695368    4208 cache.go:65] Caching tarball of preloaded images
	I1228 06:27:58.695543    4208 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1228 06:27:58.698926    4208 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1228 06:27:58.698960    4208 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1228 06:27:58.698968    4208 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1228 06:27:58.781060    4208 preload.go:313] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1228 06:27:58.781200    4208 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1228 06:28:01.852294    4208 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1228 06:28:01.852868    4208 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/download-only-304779/config.json ...
	I1228 06:28:01.852954    4208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/download-only-304779/config.json: {Name:mk31a18e5b9b796ebf0fb4087d466ab772da8ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1228 06:28:01.853383    4208 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1228 06:28:01.853666    4208 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22352-2382/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-304779 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304779"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-304779
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (3.17s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-802951 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-802951 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.171337566s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.17s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1228 06:28:07.351504    4202 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1228 06:28:07.351537    4202 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-802951
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-802951: exit status 85 (91.004474ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-304779 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-304779 │ jenkins │ v1.37.0 │ 28 Dec 25 06:27 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ delete  │ -p download-only-304779                                                                                                                                                       │ download-only-304779 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │ 28 Dec 25 06:28 UTC │
	│ start   │ -o=json --download-only -p download-only-802951 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-802951 │ jenkins │ v1.37.0 │ 28 Dec 25 06:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/28 06:28:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1228 06:28:04.222781    4409 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:28:04.222958    4409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:04.222987    4409 out.go:374] Setting ErrFile to fd 2...
	I1228 06:28:04.223008    4409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:28:04.223286    4409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:28:04.223716    4409 out.go:368] Setting JSON to true
	I1228 06:28:04.224490    4409 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":634,"bootTime":1766902651,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 06:28:04.224587    4409 start.go:143] virtualization:  
	I1228 06:28:04.227969    4409 out.go:99] [download-only-802951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 06:28:04.228213    4409 notify.go:221] Checking for updates...
	I1228 06:28:04.231170    4409 out.go:171] MINIKUBE_LOCATION=22352
	I1228 06:28:04.234377    4409 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:28:04.237340    4409 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 06:28:04.240395    4409 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 06:28:04.243386    4409 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1228 06:28:04.249327    4409 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1228 06:28:04.249593    4409 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:28:04.277836    4409 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:28:04.277946    4409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:04.350630    4409 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-28 06:28:04.341815439 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:28:04.350737    4409 docker.go:319] overlay module found
	I1228 06:28:04.353700    4409 out.go:99] Using the docker driver based on user configuration
	I1228 06:28:04.353734    4409 start.go:309] selected driver: docker
	I1228 06:28:04.353740    4409 start.go:928] validating driver "docker" against <nil>
	I1228 06:28:04.353848    4409 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:28:04.410554    4409 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-28 06:28:04.401191681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:28:04.410717    4409 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1228 06:28:04.410980    4409 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1228 06:28:04.411128    4409 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1228 06:28:04.414197    4409 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-802951 host does not exist
	  To start a cluster, run: "minikube start -p download-only-802951"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-802951
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1228 06:28:08.482636    4202 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-163241 --alsologtostderr --binary-mirror http://127.0.0.1:37223 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-163241" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-163241
--- PASS: TestBinaryMirror (0.61s)
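For reference, the dl.k8s.io URL logged at the start of this test shows the layout a binary mirror has to reproduce: /release/<version>/bin/linux/<arch>/<binary> plus the matching .sha256 checksum file. A rough sketch of standing up such a mirror by hand (the layout is inferred from the logged URL rather than from the minikube source; the profile name and port here are arbitrary):

	# Stage a previously cached kubectl and its checksum under the dl.k8s.io path layout.
	mkdir -p mirror/release/v1.35.0/bin/linux/arm64
	cp "$MINIKUBE_HOME/cache/linux/arm64/v1.35.0/kubectl" mirror/release/v1.35.0/bin/linux/arm64/
	sha256sum mirror/release/v1.35.0/bin/linux/arm64/kubectl | awk '{print $1}' > mirror/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	# Serve the tree and point minikube at it instead of dl.k8s.io.
	python3 -m http.server 37223 --directory ./mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:37223 --driver=docker --container-runtime=docker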

TestOffline (77.72s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-575789 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-575789 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m14.192819006s)
helpers_test.go:176: Cleaning up "offline-docker-575789" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-575789
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-575789: (3.523025351s)
--- PASS: TestOffline (77.72s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-201219
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-201219: exit status 85 (86.109052ms)
-- stdout --
	* Profile "addons-201219" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-201219"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-201219
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-201219: exit status 85 (94.615244ms)
-- stdout --
	* Profile "addons-201219" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-201219"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (138.31s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-201219 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-201219 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.308809766s)
--- PASS: TestAddons/Setup (138.31s)

TestAddons/serial/Volcano (42.55s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 39.318002ms
addons_test.go:886: volcano-controller stabilized in 39.93759ms
addons_test.go:870: volcano-scheduler stabilized in 40.365217ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-m6l82" [746f3100-f83b-4260-a278-7ae4a3e446a6] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.010706825s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-wk7ff" [a9f68169-23b0-41c1-ab8b-8f7f6989dff0] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003562637s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-6z7tj" [309a210f-b3f0-4e88-8b5b-4399d154ea7a] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003226024s
addons_test.go:905: (dbg) Run:  kubectl --context addons-201219 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-201219 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-201219 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [ec3b377b-1256-47a3-923b-613d8066cdf5] Pending
helpers_test.go:353: "test-job-nginx-0" [ec3b377b-1256-47a3-923b-613d8066cdf5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [ec3b377b-1256-47a3-923b-613d8066cdf5] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004478597s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable volcano --alsologtostderr -v=1: (11.91949722s)
--- PASS: TestAddons/serial/Volcano (42.55s)
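
For reference, the Volcano checks above can be repeated by hand once the addon is enabled; a minimal sketch, assuming minikube and kubectl are on PATH (context, namespace, and labels are the ones from this run):
  # Confirm the sample vcjob and its pod:
  kubectl --context addons-201219 get vcjob -n my-volcano
  kubectl --context addons-201219 get pods -n my-volcano -l volcano.sh/job-name=test-job
  # Tear the addon back down:
  minikube -p addons-201219 addons disable volcano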

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-201219 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-201219 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (10.94s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-201219 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-201219 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3cd61b5e-ed2d-45ff-8ae6-cd44eef11236] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3cd61b5e-ed2d-45ff-8ae6-cd44eef11236] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00299574s
addons_test.go:696: (dbg) Run:  kubectl --context addons-201219 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-201219 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-201219 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-201219 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.94s)
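
For reference, the two injection checks the test performs can be run directly; a minimal sketch against the same busybox pod and context:
  # gcp-auth mutates pods to carry fake credentials:
  kubectl --context addons-201219 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-201219 exec busybox -- /bin/sh -c "cat /google-app-creds.json"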

                                                
                                    
TestAddons/parallel/Registry (18.03s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.814358ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-ms62r" [8c06a985-b4c3-416c-b3d7-bb247938f684] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003006903s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-9z6tk" [428c2bbf-2387-49a9-92a6-3b40b22507e9] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00400488s
addons_test.go:394: (dbg) Run:  kubectl --context addons-201219 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-201219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-201219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.927512386s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 ip
2025/12/28 06:31:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.03s)
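
For reference, the registry probe runs once inside the cluster and once from the host; a minimal sketch, assuming minikube is on PATH (port 5000 matches the GET above):
  # In-cluster, via the Service DNS name:
  kubectl --context addons-201219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # From the host, via the node IP:
  curl -s "http://$(minikube -p addons-201219 ip):5000"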

                                                
                                    
TestAddons/parallel/RegistryCreds (0.72s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.636449ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-201219
addons_test.go:334: (dbg) Run:  kubectl --context addons-201219 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

TestAddons/parallel/Ingress (21.01s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-201219 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-201219 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-201219 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [1df36ea5-c529-4169-8398-6c7c5d74800d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [1df36ea5-c529-4169-8398-6c7c5d74800d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004021674s
I1228 06:32:22.456516    4202 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-201219 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable ingress-dns --alsologtostderr -v=1: (1.394822748s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable ingress --alsologtostderr -v=1: (7.856761547s)
--- PASS: TestAddons/parallel/Ingress (21.01s)
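
For reference, both data-path checks can be repeated by hand; a minimal sketch, assuming minikube is on PATH:
  # The Ingress routes on the Host header, so curl from inside the node:
  minikube -p addons-201219 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # ingress-dns serves DNS queries on the node IP:
  nslookup hello-john.test "$(minikube -p addons-201219 ip)"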

                                                
                                    
TestAddons/parallel/InspektorGadget (10.78s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-czskl" [d88a7942-85b6-4ed7-b530-2089e1ed8564] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004331689s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable inspektor-gadget --alsologtostderr -v=1: (5.769145619s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

TestAddons/parallel/MetricsServer (6.78s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.654543ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-wl5jv" [c8643eb5-c6ab-4cc7-a0bd-eecc66bed87d] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0036375s
addons_test.go:465: (dbg) Run:  kubectl --context addons-201219 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.78s)

TestAddons/parallel/CSI (62.89s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1228 06:32:05.036293    4202 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1228 06:32:05.041393    4202 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1228 06:32:05.041419    4202 kapi.go:107] duration metric: took 8.799539ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 8.809902ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-201219 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-201219 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [b1107f3e-19ca-479e-9de2-88d6718b53e0] Pending
helpers_test.go:353: "task-pv-pod" [b1107f3e-19ca-479e-9de2-88d6718b53e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [b1107f3e-19ca-479e-9de2-88d6718b53e0] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.002843493s
addons_test.go:574: (dbg) Run:  kubectl --context addons-201219 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-201219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-201219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-201219 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-201219 delete pod task-pv-pod: (1.426871454s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-201219 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-201219 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-201219 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [ca425970-f667-443a-9e34-40c025de0070] Pending
helpers_test.go:353: "task-pv-pod-restore" [ca425970-f667-443a-9e34-40c025de0070] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [ca425970-f667-443a-9e34-40c025de0070] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003792866s
addons_test.go:616: (dbg) Run:  kubectl --context addons-201219 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-201219 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-201219 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80316973s)
--- PASS: TestAddons/parallel/CSI (62.89s)
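
For reference, the repeated jsonpath polls above are the helper waiting for the PVC to bind and the snapshot to become ready; the same checks by hand (the one-shot wait form assumes a kubectl new enough to support --for=jsonpath):
  kubectl --context addons-201219 get pvc hpvc -o jsonpath={.status.phase} -n default
  kubectl --context addons-201219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
  # one-shot equivalent of the polling loop:
  kubectl --context addons-201219 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc -n default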

                                                
                                    
TestAddons/parallel/Headlamp (17.88s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-201219 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-tbpnc" [19f26f2a-8a0c-45d6-9d22-e9ddf366027f] Pending
helpers_test.go:353: "headlamp-6d8d595f-tbpnc" [19f26f2a-8a0c-45d6-9d22-e9ddf366027f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-tbpnc" [19f26f2a-8a0c-45d6-9d22-e9ddf366027f] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004539935s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable headlamp --alsologtostderr -v=1: (5.944312579s)
--- PASS: TestAddons/parallel/Headlamp (17.88s)

TestAddons/parallel/CloudSpanner (5.66s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-jdznh" [33f4f69e-12c8-4888-9b0a-576409436385] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006809286s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (10.7s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-201219 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-201219 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [cd8d9f70-0634-492e-83f1-73e188e6833a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [cd8d9f70-0634-492e-83f1-73e188e6833a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [cd8d9f70-0634-492e-83f1-73e188e6833a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003008876s
addons_test.go:969: (dbg) Run:  kubectl --context addons-201219 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 ssh "cat /opt/local-path-provisioner/pvc-a3d88987-322e-4d27-88a5-e9aecbc270c9_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-201219 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-201219 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.70s)
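
For reference, local-path volumes live on the node filesystem, which is why the test reads the file back over ssh; a minimal sketch (the pvc-... directory name is unique to each run, so list rather than hard-code it):
  # List provisioned volumes on the node:
  minikube -p addons-201219 ssh "ls /opt/local-path-provisioner/"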

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.7s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-gknmn" [dcf81aa2-55d7-4895-804f-ce579bd685b9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010535195s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

TestAddons/parallel/Yakd (10.87s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-wht5z" [c8376cfb-0532-4c10-befa-8b36bc9bf052] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004093806s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-201219 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-201219 addons disable yakd --alsologtostderr -v=1: (5.859601036s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

TestAddons/StoppedEnableDisable (11.38s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-201219
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-201219: (11.10914618s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-201219
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-201219
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-201219
--- PASS: TestAddons/StoppedEnableDisable (11.38s)

TestCertOptions (34.8s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-600808 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-600808 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (31.820410668s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-600808 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-600808 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-600808 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-600808" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-600808
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-600808: (2.256271878s)
--- PASS: TestCertOptions (34.80s)
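
For reference, the cert assertions boil down to inspecting the apiserver certificate inside the node; a minimal sketch, assuming minikube and openssl are available (the grep filter is added here for readability):
  minikube -p cert-options-600808 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"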

                                                
                                    
TestCertExpiration (244.88s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-574802 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1228 07:14:42.764895    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:14:48.269554    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-574802 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.310596246s)
E1228 07:15:27.660305    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-574802 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-574802 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (28.189090214s)
helpers_test.go:176: Cleaning up "cert-expiration-574802" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-574802
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-574802: (2.378260039s)
--- PASS: TestCertExpiration (244.88s)
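
For reference, the test is a two-step start: issue short-lived certs, let them lapse, then restart to re-issue; the same sequence by hand (profile name is a placeholder):
  minikube start -p cert-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=docker
  # ...wait out the 3m TTL, then re-issue for a year:
  minikube start -p cert-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=docker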

                                                
                                    
TestDockerFlags (36.71s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-974112 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-974112 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.5333343s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-974112 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-974112 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-974112" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-974112
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-974112: (2.462608201s)
--- PASS: TestDockerFlags (36.71s)
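
For reference, --docker-env values land in the docker unit's Environment and --docker-opt values in its ExecStart, which is exactly what the two ssh probes read back; by hand:
  minikube -p docker-flags-974112 ssh "sudo systemctl show docker --property=Environment --no-pager"
  minikube -p docker-flags-974112 ssh "sudo systemctl show docker --property=ExecStart --no-pager"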

                                                
                                    
TestErrorSpam/setup (29.76s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-748385 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-748385 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-748385 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-748385 --driver=docker  --container-runtime=docker: (29.760973978s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (29.76s)

TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.06s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 status
--- PASS: TestErrorSpam/status (1.06s)

TestErrorSpam/pause (1.55s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.78s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (11.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 stop: (11.063101879s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-748385 --log_dir /tmp/nospam-748385 stop
--- PASS: TestErrorSpam/stop (11.26s)

TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22352-2382/.minikube/files/etc/test/nested/copy/4202/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (70.27s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723745 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-723745 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m10.271494125s)
--- PASS: TestFunctional/serial/StartWithProxy (70.27s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.88s)
=== RUN   TestFunctional/serial/SoftStart
I1228 06:35:20.143391    4202 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723745 --alsologtostderr -v=8
E1228 06:35:27.662512    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:27.668478    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:27.678757    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:27.699121    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:27.739497    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:27.820005    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:27.980444    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:28.301075    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:28.941402    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:30.221937    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:32.782161    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:35:37.902663    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-723745 --alsologtostderr -v=8: (27.878390159s)
functional_test.go:678: soft start took 27.881266118s for "functional-723745" cluster.
I1228 06:35:48.022112    4202 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (27.88s)

TestFunctional/serial/KubeContext (0.1s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.10s)

TestFunctional/serial/KubectlGetPods (0.16s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-723745 get po -A
E1228 06:35:48.144383    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 cache add registry.k8s.io/pause:3.1: (1.0330524s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

TestFunctional/serial/CacheCmd/cache/add_local (0.92s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-723745 /tmp/TestFunctionalserialCacheCmdcacheadd_local1567892222/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cache add minikube-local-cache-test:functional-723745
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cache delete minikube-local-cache-test:functional-723745
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-723745
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.118798ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
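
For reference, the reload round-trip is: delete the image in the node, confirm it is gone, re-push from minikube's host-side cache, confirm it is back; by hand:
  minikube -p functional-723745 ssh sudo docker rmi registry.k8s.io/pause:latest
  minikube -p functional-723745 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
  minikube -p functional-723745 cache reload
  minikube -p functional-723745 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # image restored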

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 kubectl -- --context functional-723745 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-723745 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723745 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1228 06:36:08.625519    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-723745 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.200906581s)
functional_test.go:776: restart took 42.201004305s for "functional-723745" cluster.
I1228 06:36:36.809733    4202 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (42.20s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-723745 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 logs: (1.231189079s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 logs --file /tmp/TestFunctionalserialLogsFileCmd3833269833/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 logs --file /tmp/TestFunctionalserialLogsFileCmd3833269833/001/logs.txt: (1.254012147s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.81s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-723745 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-723745
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-723745: exit status 115 (381.980008ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30126 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-723745 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-723745 delete -f testdata/invalidsvc.yaml: (1.195902868s)
--- PASS: TestFunctional/serial/InvalidService (4.81s)
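
The failure mode exercised here is a NodePort Service whose selector matches no running pod: `minikube service` still prints the URL table but exits 115 (SVC_UNREACHABLE). A self-contained sketch of the same situation; the service name dummy-svc is illustrative, not from this run:

    # create a NodePort service with no backing pods (selector app=dummy-svc matches nothing)
    kubectl --context functional-723745 create service nodeport dummy-svc --tcp=80:80
    out/minikube-linux-arm64 service dummy-svc -p functional-723745
    echo "exit status: $?"    # 115 was observed above for the equivalent invalid-svc
    kubectl --context functional-723745 delete service dummy-svc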

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 config get cpus: exit status 14 (64.320697ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 config get cpus: exit status 14 (99.194613ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
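
The round-trip being tested: `config set` persists a value, `config get` reads it back, and after `config unset` a `config get` on the missing key exits 14 with "specified key could not be found in config", exactly as captured above. By hand:

    out/minikube-linux-arm64 -p functional-723745 config set cpus 2
    out/minikube-linux-arm64 -p functional-723745 config get cpus     # prints 2, exits 0
    out/minikube-linux-arm64 -p functional-723745 config unset cpus
    out/minikube-linux-arm64 -p functional-723745 config get cpus     # key gone
    echo "exit status: $?"    # 14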

TestFunctional/parallel/DashboardCmd (7.79s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-723745 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-723745 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 48011: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.79s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723745 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-723745 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (182.587948ms)

-- stdout --
	* [functional-723745] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1228 06:37:17.723380   47452 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:37:17.723494   47452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:17.723504   47452 out.go:374] Setting ErrFile to fd 2...
	I1228 06:37:17.723509   47452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:17.723770   47452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:37:17.724132   47452 out.go:368] Setting JSON to false
	I1228 06:37:17.725098   47452 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1187,"bootTime":1766902651,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 06:37:17.725172   47452 start.go:143] virtualization:  
	I1228 06:37:17.728338   47452 out.go:179] * [functional-723745] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1228 06:37:17.731172   47452 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:37:17.731319   47452 notify.go:221] Checking for updates...
	I1228 06:37:17.736892   47452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:37:17.739663   47452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 06:37:17.742450   47452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 06:37:17.745282   47452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 06:37:17.748106   47452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:37:17.751523   47452 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:37:17.752161   47452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:37:17.786641   47452 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:37:17.786745   47452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:37:17.841092   47452 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 06:37:17.832015126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:37:17.841200   47452 docker.go:319] overlay module found
	I1228 06:37:17.844421   47452 out.go:179] * Using the docker driver based on existing profile
	I1228 06:37:17.847328   47452 start.go:309] selected driver: docker
	I1228 06:37:17.847346   47452 start.go:928] validating driver "docker" against &{Name:functional-723745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-723745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:37:17.847447   47452 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:37:17.850988   47452 out.go:203] 
	W1228 06:37:17.853951   47452 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1228 06:37:17.856860   47452 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723745 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.48s)
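
--dry-run runs the full flag and driver validation against the existing profile without mutating it; a memory request under the usable minimum of 1800MB fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a valid dry run exits 0. A sketch:

    out/minikube-linux-arm64 start -p functional-723745 --dry-run --memory 250MB --driver=docker --container-runtime=docker
    echo "exit status: $?"    # 23, as captured above
    out/minikube-linux-arm64 start -p functional-723745 --dry-run --driver=docker --container-runtime=docker
    echo "exit status: $?"    # 0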

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723745 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-723745 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (210.128674ms)

-- stdout --
	* [functional-723745] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1228 06:37:19.399171   47820 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:37:19.399630   47820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:19.399669   47820 out.go:374] Setting ErrFile to fd 2...
	I1228 06:37:19.399691   47820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:37:19.400633   47820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:37:19.401142   47820 out.go:368] Setting JSON to false
	I1228 06:37:19.402053   47820 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1189,"bootTime":1766902651,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1228 06:37:19.402185   47820 start.go:143] virtualization:  
	I1228 06:37:19.407097   47820 out.go:179] * [functional-723745] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1228 06:37:19.409834   47820 out.go:179]   - MINIKUBE_LOCATION=22352
	I1228 06:37:19.409893   47820 notify.go:221] Checking for updates...
	I1228 06:37:19.415968   47820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1228 06:37:19.419114   47820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	I1228 06:37:19.422150   47820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	I1228 06:37:19.425177   47820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1228 06:37:19.428355   47820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1228 06:37:19.431803   47820 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:37:19.432606   47820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1228 06:37:19.456543   47820 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1228 06:37:19.456655   47820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:37:19.520873   47820 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-28 06:37:19.509643484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:37:19.520981   47820 docker.go:319] overlay module found
	I1228 06:37:19.526125   47820 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1228 06:37:19.528996   47820 start.go:309] selected driver: docker
	I1228 06:37:19.529016   47820 start.go:928] validating driver "docker" against &{Name:functional-723745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766884053-22351@sha256:2a274089182002e4ae2c5a05f988da35736dc812d4e6b2b8d1dd2036cb8212b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-723745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1228 06:37:19.529129   47820 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1228 06:37:19.532790   47820 out.go:203] 
	W1228 06:37:19.535924   47820 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1228 06:37:19.538979   47820 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
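
status is checked in three output shapes: the default table, a Go template via -f, and JSON via -o json. Note that in the logged -f invocation the misspelled "kublet" is only literal label text inside the format string; the template key itself is the correctly spelled .Kubelet, so the command works. A sketch:

    out/minikube-linux-arm64 -p functional-723745 status
    out/minikube-linux-arm64 -p functional-723745 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
    out/minikube-linux-arm64 -p functional-723745 status -o json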

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-723745 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-723745 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-hrp95" [ba832403-b34a-4925-bb44-92e4a25966ec] Pending
helpers_test.go:353: "hello-node-connect-5d95464fd4-hrp95" [ba832403-b34a-4925-bb44-92e4a25966ec] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003500628s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31538
functional_test.go:1685: http://192.168.49.2:31538: success! body:
Request served by hello-node-connect-5d95464fd4-hrp95

HTTP/1.1 GET /

Host: 192.168.49.2:31538
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
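
The connectivity flow is: create a deployment, expose it as a NodePort service, resolve the node URL with `minikube service --url`, and fetch it; the echo-server image replies with the serving pod name and request headers, which is the body shown above. By hand:

    kubectl --context functional-723745 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
    kubectl --context functional-723745 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-723745 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-arm64 -p functional-723745 service hello-node-connect --url)
    curl -s "$URL"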

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (19.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [ed8a4619-0bb7-4612-a91e-e6460d073ceb] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003621885s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-723745 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-723745 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-723745 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-723745 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [91629286-7452-4074-b6fc-8b3e432af679] Pending
helpers_test.go:353: "sp-pod" [91629286-7452-4074-b6fc-8b3e432af679] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [91629286-7452-4074-b6fc-8b3e432af679] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004022461s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-723745 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-723745 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-723745 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e9f60df6-8599-4c96-9a13-a83d06ff5f38] Pending
helpers_test.go:353: "sp-pod" [e9f60df6-8599-4c96-9a13-a83d06ff5f38] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.009795843s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-723745 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.82s)
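
The persistence check: write a file through one pod, delete the pod, start a fresh pod against the same PVC, and confirm the file survived the pod's lifetime. A sketch using the same testdata manifests (paths are relative to minikube's test/integration directory):

    kubectl --context functional-723745 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-723745 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-723745 wait --for=condition=Ready pod/sp-pod --timeout=240s
    kubectl --context functional-723745 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-723745 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-723745 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-723745 wait --for=condition=Ready pod/sp-pod --timeout=240s
    kubectl --context functional-723745 exec sp-pod -- ls /tmp/mount    # foo should still be listed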

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (1.67s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh -n functional-723745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cp functional-723745:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2314218117/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh -n functional-723745 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh -n functional-723745 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.67s)
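
minikube cp copies in both directions; a node-side path is addressed as <profile>:<path>, and a bare target path lands on the node, as the sudo cat verifications above show. A round-trip sketch (file names are illustrative):

    echo "cp smoke test" > /tmp/cp-test.txt
    out/minikube-linux-arm64 -p functional-723745 cp /tmp/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-arm64 -p functional-723745 ssh -n functional-723745 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p functional-723745 cp functional-723745:/home/docker/cp-test.txt /tmp/cp-test-back.txt
    diff /tmp/cp-test.txt /tmp/cp-test-back.txt && echo "round-trip intact"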

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4202/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /etc/test/nested/copy/4202/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.68s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4202.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /etc/ssl/certs/4202.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4202.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /usr/share/ca-certificates/4202.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/42022.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /etc/ssl/certs/42022.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/42022.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /usr/share/ca-certificates/42022.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
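
Cert sync propagates user-supplied PEM files into the node at both /etc/ssl/certs and /usr/share/ca-certificates, plus hash-named entries (the 51391683.0 and 3ec20f2e.0 files above). A sketch of the verification half only, mirroring what the test does:

    for f in /etc/ssl/certs/4202.pem /usr/share/ca-certificates/4202.pem /etc/ssl/certs/51391683.0; do
      out/minikube-linux-arm64 -p functional-723745 ssh "sudo cat $f" >/dev/null && echo "found $f"
    done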

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-723745 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
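
The label check is plain kubectl with a Go template that ranges over the first node's metadata.labels map and prints each key. The same query by hand, without the extra quoting the harness adds:

    kubectl --context functional-723745 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'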

TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 ssh "sudo systemctl is-active crio": exit status 1 (500.624444ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
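
With docker as the active runtime, crio must be stopped. `systemctl is-active` prints the unit state and exits non-zero for anything but "active" (3 for inactive), which propagates through ssh as the exit status 3 / "inactive" pair captured above. By hand:

    out/minikube-linux-arm64 -p functional-723745 ssh "sudo systemctl is-active crio"
    echo "exit status: $?"    # non-zero; stdout should read "inactive"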

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-723745 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-723745 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-723745 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 42288: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-723745 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 version -o=json --components: (1.176567031s)
--- PASS: TestFunctional/parallel/Version/components (1.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723745 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-723745
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723745 image ls --format short --alsologtostderr:
I1228 06:37:28.660309   48851 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:28.660807   48851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:28.660845   48851 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:28.660867   48851 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:28.661358   48851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 06:37:28.662424   48851 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:28.662566   48851 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:28.663272   48851 cli_runner.go:164] Run: docker container inspect functional-723745 --format={{.State.Status}}
I1228 06:37:28.697588   48851 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:28.697645   48851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723745
I1228 06:37:28.734408   48851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/functional-723745/id_rsa Username:docker}
I1228 06:37:28.891577   48851 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)
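
image ls is exercised in all four formats; short prints one repo:tag per line, while table, json, and yaml (the following tests) also carry image IDs and sizes. By hand:

    out/minikube-linux-arm64 -p functional-723745 image ls --format short
    out/minikube-linux-arm64 -p functional-723745 image ls --format table
    out/minikube-linux-arm64 -p functional-723745 image ls --format json
    out/minikube-linux-arm64 -p functional-723745 image ls --format yaml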

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723745 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test       │ functional-723745 │ 9c0f009eaf941 │ 30B    │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 88898f1d1a62a │ 71.1MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ de369f46c2ff5 │ 72.8MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 271e49a0ebc56 │ 59.8MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ e08f4d9d2e6ed │ 73.4MB │
│ registry.k8s.io/pause                             │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/pause                             │ 3.1               │ 8057e0500773a │ 525kB  │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 962dbbc0e55ec │ 53.7MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-723745 │ ce2d2cda2d858 │ 4.78MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                             │ 3.3               │ 3d18732f8686c │ 484kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ c3fcf259c473a │ 83.9MB │
│ registry.k8s.io/pause                             │ latest            │ 8cb2091f603e7 │ 240kB  │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ ddc8422d4d35a │ 48.7MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ ba04bb24b9575 │ 29MB   │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723745 image ls --format table --alsologtostderr:
I1228 06:37:29.342712   49032 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:29.342875   49032 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:29.342887   49032 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:29.342892   49032 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:29.343171   49032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 06:37:29.343936   49032 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:29.344078   49032 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:29.344740   49032 cli_runner.go:164] Run: docker container inspect functional-723745 --format={{.State.Status}}
I1228 06:37:29.375113   49032 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:29.375161   49032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723745
I1228 06:37:29.407572   49032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/functional-723745/id_rsa Username:docker}
I1228 06:37:29.509732   49032 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723745 image ls --format json --alsologtostderr:
[{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"83900000"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"59800000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"9c0f009eaf941fd0ee01132f09e39e3a44c85b226abbfffe5b069b8f430e2113","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-723745"],"size":"30"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"73400000"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"71100000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4780000"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"48700000"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"72800000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723745 image ls --format json --alsologtostderr:
I1228 06:37:29.009934   48931 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:29.010101   48931 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:29.010647   48931 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:29.010672   48931 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:29.011008   48931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 06:37:29.011659   48931 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:29.011857   48931 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:29.012540   48931 cli_runner.go:164] Run: docker container inspect functional-723745 --format={{.State.Status}}
I1228 06:37:29.068403   48931 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:29.068471   48931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723745
I1228 06:37:29.090232   48931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/functional-723745/id_rsa Username:docker}
I1228 06:37:29.191010   48931 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
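
The JSON listing above is one array of {id, repoDigests, repoTags, size} objects, so it is easy to post-process on the host. A minimal sketch, assuming jq is available on the Jenkins host (jq itself is not part of the test):

# Print every repo tag known to the node's Docker daemon.
out/minikube-linux-arm64 -p functional-723745 image ls --format json \
  | jq -r '.[].repoTags[]'

# Look up the reported size of one image by tag.
out/minikube-linux-arm64 -p functional-723745 image ls --format json \
  | jq -r '.[] | select(.repoTags[]? == "registry.k8s.io/pause:3.10.1") | .size'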

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723745 image ls --format yaml --alsologtostderr:
- id: 9c0f009eaf941fd0ee01132f09e39e3a44c85b226abbfffe5b069b8f430e2113
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-723745
size: "30"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "72800000"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "59800000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4780000"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "71100000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "83900000"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "48700000"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "73400000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723745 image ls --format yaml --alsologtostderr:
I1228 06:37:28.668936   48856 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:28.669094   48856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:28.669105   48856 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:28.669111   48856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:28.669374   48856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 06:37:28.670112   48856 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:28.670237   48856 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:28.670799   48856 cli_runner.go:164] Run: docker container inspect functional-723745 --format={{.State.Status}}
I1228 06:37:28.699646   48856 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:28.699699   48856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723745
I1228 06:37:28.723328   48856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/functional-723745/id_rsa Username:docker}
I1228 06:37:28.830781   48856 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 ssh pgrep buildkitd: exit status 1 (344.237408ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image build -t localhost/my-image:functional-723745 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 image build -t localhost/my-image:functional-723745 testdata/build --alsologtostderr: (3.000135151s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723745 image build -t localhost/my-image:functional-723745 testdata/build --alsologtostderr:
I1228 06:37:29.322360   49027 out.go:360] Setting OutFile to fd 1 ...
I1228 06:37:29.324389   49027 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:29.324432   49027 out.go:374] Setting ErrFile to fd 2...
I1228 06:37:29.324451   49027 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1228 06:37:29.324805   49027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
I1228 06:37:29.325716   49027 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:29.329582   49027 config.go:182] Loaded profile config "functional-723745": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1228 06:37:29.330153   49027 cli_runner.go:164] Run: docker container inspect functional-723745 --format={{.State.Status}}
I1228 06:37:29.354889   49027 ssh_runner.go:195] Run: systemctl --version
I1228 06:37:29.354948   49027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723745
I1228 06:37:29.378802   49027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/functional-723745/id_rsa Username:docker}
I1228 06:37:29.482936   49027 build_images.go:162] Building image from path: /tmp/build.1202597803.tar
I1228 06:37:29.483011   49027 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1228 06:37:29.491915   49027 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1202597803.tar
I1228 06:37:29.495948   49027 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1202597803.tar: stat -c "%s %y" /var/lib/minikube/build/build.1202597803.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1202597803.tar': No such file or directory
I1228 06:37:29.495979   49027 ssh_runner.go:362] scp /tmp/build.1202597803.tar --> /var/lib/minikube/build/build.1202597803.tar (3072 bytes)
I1228 06:37:29.518958   49027 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1202597803
I1228 06:37:29.528887   49027 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1202597803 -xf /var/lib/minikube/build/build.1202597803.tar
I1228 06:37:29.546095   49027 docker.go:364] Building image: /var/lib/minikube/build/build.1202597803
I1228 06:37:29.546169   49027 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-723745 /var/lib/minikube/build/build.1202597803
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:25191b42498b358557a263e17cf4fc27fea5ba238e56775d791f24de23ac7783 done
#8 naming to localhost/my-image:functional-723745 done
#8 DONE 0.1s
I1228 06:37:32.204782   49027 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-723745 /var/lib/minikube/build/build.1202597803: (2.658585665s)
I1228 06:37:32.204852   49027 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1202597803
I1228 06:37:32.213669   49027 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1202597803.tar
I1228 06:37:32.222835   49027 build_images.go:218] Built localhost/my-image:functional-723745 from /tmp/build.1202597803.tar
I1228 06:37:32.222864   49027 build_images.go:134] succeeded building to: functional-723745
I1228 06:37:32.222869   49027 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.57s)
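
The BuildKit steps above ([1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /) pin down the three-line Dockerfile in testdata/build. A sketch of reproducing the build by hand; the content.txt payload is a placeholder, everything else is taken from the log:

# Recreate the build context implied by the BuildKit output.
mkdir -p testdata/build
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo "placeholder" > testdata/build/content.txt  # payload is an assumption

# Build inside the minikube node's Docker daemon, as the test does.
out/minikube-linux-arm64 -p functional-723745 image build \
  -t localhost/my-image:functional-723745 testdata/build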

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-723745 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-723745 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [b0f697b6-a175-46b4-8b54-acf8fe2b0ebf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [b0f697b6-a175-46b4-8b54-acf8fe2b0ebf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003753976s
I1228 06:36:55.630484    4202 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.41s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745 --alsologtostderr
E1228 06:36:49.586290    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
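
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full save/remove/restore round trip. A condensed sketch of the same cycle; the tarball path is arbitrary, and every flag is one the tests themselves use:

IMG=ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745

# Export the image from the node to a host-side tarball, then remove it.
out/minikube-linux-arm64 -p functional-723745 image save "$IMG" /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-723745 image rm "$IMG"

# Restore it from the tarball and confirm it is back.
out/minikube-linux-arm64 -p functional-723745 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-723745 image ls | grep echo-server

# Alternatively, push it straight into the host's Docker daemon.
out/minikube-linux-arm64 -p functional-723745 image save --daemon "$IMG"
docker image inspect "$IMG"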

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-723745 docker-env) && out/minikube-linux-arm64 status -p functional-723745"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-723745 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)
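
DockerEnv/bash shows the standard pattern for pointing the host's docker CLI at the Docker daemon inside the minikube node; the eval line is the same one the test runs:

# Export DOCKER_HOST (and TLS settings) for the functional-723745 node.
eval "$(out/minikube-linux-arm64 -p functional-723745 docker-env)"

# This now lists images inside the node, not on the host.
docker images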

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-723745 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.229.5 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-723745 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 44505: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.23s)
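
The tunnel sub-tests follow the usual workflow: start a tunnel, wait for the LoadBalancer service to be assigned an ingress IP, reach it from the host, then tear the tunnel down. A sketch of the same sequence; the curl probe is an assumption (the test performs an HTTP GET internally):

# Run the tunnel in the background so LoadBalancer services get an IP.
out/minikube-linux-arm64 -p functional-723745 tunnel &
TUNNEL_PID=$!

# Read the ingress IP once the tunnel has populated it.
kubectl --context functional-723745 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# The service is then reachable directly from the host (10.96.229.5 in this run).
curl -s http://10.96.229.5/ >/dev/null && echo "tunnel is working"

kill "$TUNNEL_PID"  # equivalent of the DeleteTunnel step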

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.07s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdany-port2426479520/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766903816003136334" to /tmp/TestFunctionalparallelMountCmdany-port2426479520/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766903816003136334" to /tmp/TestFunctionalparallelMountCmdany-port2426479520/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766903816003136334" to /tmp/TestFunctionalparallelMountCmdany-port2426479520/001/test-1766903816003136334
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (411.68745ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:36:56.415977    4202 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 28 06:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 28 06:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 28 06:36 test-1766903816003136334
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh cat /mount-9p/test-1766903816003136334
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-723745 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [8b766359-c076-470b-8b1d-246d302b5aba] Pending
helpers_test.go:353: "busybox-mount" [8b766359-c076-470b-8b1d-246d302b5aba] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [8b766359-c076-470b-8b1d-246d302b5aba] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [8b766359-c076-470b-8b1d-246d302b5aba] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003942464s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-723745 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdany-port2426479520/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.07s)
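
MountCmd/any-port mounts a host directory into the node over 9p on an ephemeral port and verifies it from both the node and a pod. A minimal host-side sketch; the source path is a placeholder:

# Mount a host directory into the node at /mount-9p (9p protocol).
out/minikube-linux-arm64 mount -p functional-723745 /tmp/demo-src:/mount-9p &
MOUNT_PID=$!

# Verify the mount from inside the node, as the test does.
out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-723745 ssh -- ls -la /mount-9p

kill "$MOUNT_PID"  # stop the mount daemon when done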

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdspecific-port330650414/001:/mount-9p --alsologtostderr -v=1 --port 34763]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (343.92647ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1228 06:37:04.415610    4202 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdspecific-port330650414/001:/mount-9p --alsologtostderr -v=1 --port 34763] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723745 ssh "sudo umount -f /mount-9p": exit status 1 (288.820821ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-723745 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdspecific-port330650414/001:/mount-9p --alsologtostderr -v=1 --port 34763] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2534530741/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2534530741/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2534530741/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-723745 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2534530741/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2534530741/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723745 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2534530741/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)
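
VerifyCleanup leans on mount --kill=true, which reaps every running mount process for the profile in one shot instead of stopping each daemon individually. A sketch; the source path is a placeholder:

# Start several mounts of the same source at different mount points...
out/minikube-linux-arm64 mount -p functional-723745 /tmp/demo-src:/mount1 &
out/minikube-linux-arm64 mount -p functional-723745 /tmp/demo-src:/mount2 &

# ...then kill all mount processes for the profile at once.
out/minikube-linux-arm64 mount -p functional-723745 --kill=true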

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-723745 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-723745 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-h88gd" [6063a349-c54a-4899-9952-922464e90d91] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-h88gd" [6063a349-c54a-4899-9952-922464e90d91] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003990812s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.30s)
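
The service tests start from a plain Deployment exposed as a NodePort, using the same two kubectl calls shown above. A sketch; the wait step is an addition for scripted use (the test polls pods by label instead):

# Create the echo-server deployment and expose it on a NodePort.
kubectl --context functional-723745 create deployment hello-node \
  --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
kubectl --context functional-723745 expose deployment hello-node \
  --type=NodePort --port=8080

# Block until the backing pod is Ready.
kubectl --context functional-723745 wait --for=condition=Ready pod \
  -l app=hello-node --timeout=120s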

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "431.860984ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "59.418806ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "357.514259ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "51.049163ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
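
profile list supports both table and JSON output, and the --light variant appears to skip live cluster-status probing, which would explain why it returns in ~51ms versus ~358ms above. A sketch; piping through jq is an assumption, not part of the test:

# Full listing: probes each cluster's status (slower).
out/minikube-linux-arm64 profile list -o json | jq .

# Light listing: reads profile configs only (much faster).
out/minikube-linux-arm64 profile list -o json --light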

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 service list
functional_test.go:1474: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 service list: (1.397846316s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.40s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 service list -o json
2025/12/28 06:37:27 [DEBUG] GET http://127.0.0.1:46533/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1504: (dbg) Done: out/minikube-linux-arm64 -p functional-723745 service list -o json: (1.577363724s)
functional_test.go:1509: Took "1.577454564s" to run "out/minikube-linux-arm64 -p functional-723745 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:30698
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-723745 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:30698
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
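
With the NodePort service up, minikube service resolves its host-reachable endpoint in several shapes, one per sub-test above; every flag below appears verbatim in the log:

# Plain URL (http://192.168.49.2:30698 in this run).
out/minikube-linux-arm64 -p functional-723745 service hello-node --url

# HTTPS form of the same endpoint.
out/minikube-linux-arm64 -p functional-723745 service --namespace=default --https --url hello-node

# Just the node IP, via a Go template.
out/minikube-linux-arm64 -p functional-723745 service hello-node --url --format={{.IP}}

# Machine-readable listing of every service.
out/minikube-linux-arm64 -p functional-723745 service list -o json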

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-723745
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-723745
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-723745
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (186.76s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1228 06:38:11.508347    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:40:27.660696    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (3m5.831517752s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (186.76s)
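
StartCluster brings up a multi-node HA control plane with the flags shown above; a sketch of the equivalent manual invocation:

# Start an HA cluster and wait for all components to be healthy.
out/minikube-linux-arm64 -p ha-470825 start --ha --memory 3072 --wait true \
  --alsologtostderr -v 5 --driver=docker --container-runtime=docker

# Confirm every node reports Ready.
out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5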

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 kubectl -- rollout status deployment/busybox: (4.449523673s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-pdgjd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-s4n5w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-vrjp4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-pdgjd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-s4n5w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-vrjp4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-pdgjd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-s4n5w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-vrjp4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.43s)

TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-pdgjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-pdgjd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-s4n5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-s4n5w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-vrjp4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 kubectl -- exec busybox-769dd8b7dd-vrjp4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
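
PingHostFromPods checks pod-to-host connectivity by resolving host.minikube.internal inside each busybox pod and pinging the returned gateway address (192.168.49.1 here). A sketch for a single pod, using kubectl directly rather than the minikube kubectl wrapper the test invokes:

# Resolve the host's address from inside the pod...
kubectl --context ha-470825 exec busybox-769dd8b7dd-pdgjd -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

# ...then verify the host answers ICMP from the pod network.
kubectl --context ha-470825 exec busybox-769dd8b7dd-pdgjd -- \
  sh -c "ping -c 1 192.168.49.1"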

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (36.33s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node add --alsologtostderr -v 5
E1228 06:40:55.348807    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 node add --alsologtostderr -v 5: (35.246318472s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5: (1.08224637s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.33s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-470825 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.101292544s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (20.28s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 status --output json --alsologtostderr -v 5: (1.102612197s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp testdata/cp-test.txt ha-470825:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854750187/001/cp-test_ha-470825.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825:/home/docker/cp-test.txt ha-470825-m02:/home/docker/cp-test_ha-470825_ha-470825-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test_ha-470825_ha-470825-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825:/home/docker/cp-test.txt ha-470825-m03:/home/docker/cp-test_ha-470825_ha-470825-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test_ha-470825_ha-470825-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825:/home/docker/cp-test.txt ha-470825-m04:/home/docker/cp-test_ha-470825_ha-470825-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test_ha-470825_ha-470825-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp testdata/cp-test.txt ha-470825-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854750187/001/cp-test_ha-470825-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m02:/home/docker/cp-test.txt ha-470825:/home/docker/cp-test_ha-470825-m02_ha-470825.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test_ha-470825-m02_ha-470825.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m02:/home/docker/cp-test.txt ha-470825-m03:/home/docker/cp-test_ha-470825-m02_ha-470825-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test_ha-470825-m02_ha-470825-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m02:/home/docker/cp-test.txt ha-470825-m04:/home/docker/cp-test_ha-470825-m02_ha-470825-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test_ha-470825-m02_ha-470825-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp testdata/cp-test.txt ha-470825-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854750187/001/cp-test_ha-470825-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m03:/home/docker/cp-test.txt ha-470825:/home/docker/cp-test_ha-470825-m03_ha-470825.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test_ha-470825-m03_ha-470825.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m03:/home/docker/cp-test.txt ha-470825-m02:/home/docker/cp-test_ha-470825-m03_ha-470825-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test_ha-470825-m03_ha-470825-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m03:/home/docker/cp-test.txt ha-470825-m04:/home/docker/cp-test_ha-470825-m03_ha-470825-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test_ha-470825-m03_ha-470825-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp testdata/cp-test.txt ha-470825-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3854750187/001/cp-test_ha-470825-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test.txt"
E1228 06:41:45.221882    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:45.228923    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:45.239449    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:45.259748    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m04:/home/docker/cp-test.txt ha-470825:/home/docker/cp-test_ha-470825-m04_ha-470825.txt
E1228 06:41:45.300371    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:45.381500    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:45.541848    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test.txt"
E1228 06:41:45.862422    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825 "sudo cat /home/docker/cp-test_ha-470825-m04_ha-470825.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m04:/home/docker/cp-test.txt ha-470825-m02:/home/docker/cp-test_ha-470825-m04_ha-470825-m02.txt
E1228 06:41:46.502944    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m02 "sudo cat /home/docker/cp-test_ha-470825-m04_ha-470825-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 cp ha-470825-m04:/home/docker/cp-test.txt ha-470825-m03:/home/docker/cp-test_ha-470825-m04_ha-470825-m03.txt
E1228 06:41:47.783790    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 ssh -n ha-470825-m03 "sudo cat /home/docker/cp-test_ha-470825-m04_ha-470825-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.28s)
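
Note: every copy check above follows the same round trip — minikube cp pushes a file to a node (or pulls one out), and an ssh cat confirms the bytes arrived. A minimal sketch of the pattern, with illustrative profile and node names:

  # host -> node, then verify over ssh
  minikube -p ha-demo cp testdata/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt
  minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"
  # node -> node copies use the same node:path syntax on both sides
  minikube -p ha-demo cp ha-demo-m02:/home/docker/cp-test.txt ha-demo-m03:/home/docker/cp-test.txt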

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.36s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node stop m02 --alsologtostderr -v 5
E1228 06:41:50.344375    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:41:55.464937    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 node stop m02 --alsologtostderr -v 5: (11.458555445s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5: exit status 7 (897.975913ms)

                                                
                                                
-- stdout --
	ha-470825
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-470825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-470825-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-470825-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:42:00.378170   71245 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:42:00.378397   71245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:42:00.378405   71245 out.go:374] Setting ErrFile to fd 2...
	I1228 06:42:00.378415   71245 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:42:00.378822   71245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:42:00.379078   71245 out.go:368] Setting JSON to false
	I1228 06:42:00.379152   71245 mustload.go:66] Loading cluster: ha-470825
	I1228 06:42:00.379613   71245 config.go:182] Loaded profile config "ha-470825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:42:00.379630   71245 status.go:174] checking status of ha-470825 ...
	I1228 06:42:00.380171   71245 cli_runner.go:164] Run: docker container inspect ha-470825 --format={{.State.Status}}
	I1228 06:42:00.385752   71245 notify.go:221] Checking for updates...
	I1228 06:42:00.414594   71245 status.go:371] ha-470825 host status = "Running" (err=<nil>)
	I1228 06:42:00.414630   71245 host.go:66] Checking if "ha-470825" exists ...
	I1228 06:42:00.415071   71245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-470825
	I1228 06:42:00.463346   71245 host.go:66] Checking if "ha-470825" exists ...
	I1228 06:42:00.463699   71245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:42:00.463741   71245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-470825
	I1228 06:42:00.490940   71245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/ha-470825/id_rsa Username:docker}
	I1228 06:42:00.590163   71245 ssh_runner.go:195] Run: systemctl --version
	I1228 06:42:00.596963   71245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:42:00.612834   71245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:42:00.682404   71245 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-28 06:42:00.671461261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:42:00.683005   71245 kubeconfig.go:125] found "ha-470825" server: "https://192.168.49.254:8443"
	I1228 06:42:00.683038   71245 api_server.go:166] Checking apiserver status ...
	I1228 06:42:00.683087   71245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:42:00.698272   71245 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2244/cgroup
	I1228 06:42:00.706805   71245 api_server.go:192] apiserver freezer: "4:freezer:/docker/6367884f11ab3ce968c8576869e7d4edec096a2eb039aadfeae3fe24b9a1ce1e/kubepods/burstable/pod69e05cc814e8e90766df892e705bbc66/1f36595b461d16500fa4f1c0122b565cec82b6849f81d5e842a49d9842552d1d"
	I1228 06:42:00.706872   71245 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6367884f11ab3ce968c8576869e7d4edec096a2eb039aadfeae3fe24b9a1ce1e/kubepods/burstable/pod69e05cc814e8e90766df892e705bbc66/1f36595b461d16500fa4f1c0122b565cec82b6849f81d5e842a49d9842552d1d/freezer.state
	I1228 06:42:00.714944   71245 api_server.go:214] freezer state: "THAWED"
	I1228 06:42:00.714973   71245 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:42:00.726341   71245 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:42:00.726373   71245 status.go:463] ha-470825 apiserver status = Running (err=<nil>)
	I1228 06:42:00.726384   71245 status.go:176] ha-470825 status: &{Name:ha-470825 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:42:00.726425   71245 status.go:174] checking status of ha-470825-m02 ...
	I1228 06:42:00.726751   71245 cli_runner.go:164] Run: docker container inspect ha-470825-m02 --format={{.State.Status}}
	I1228 06:42:00.746262   71245 status.go:371] ha-470825-m02 host status = "Stopped" (err=<nil>)
	I1228 06:42:00.746289   71245 status.go:384] host is not running, skipping remaining checks
	I1228 06:42:00.746296   71245 status.go:176] ha-470825-m02 status: &{Name:ha-470825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:42:00.746315   71245 status.go:174] checking status of ha-470825-m03 ...
	I1228 06:42:00.746649   71245 cli_runner.go:164] Run: docker container inspect ha-470825-m03 --format={{.State.Status}}
	I1228 06:42:00.766772   71245 status.go:371] ha-470825-m03 host status = "Running" (err=<nil>)
	I1228 06:42:00.766796   71245 host.go:66] Checking if "ha-470825-m03" exists ...
	I1228 06:42:00.767269   71245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-470825-m03
	I1228 06:42:00.793249   71245 host.go:66] Checking if "ha-470825-m03" exists ...
	I1228 06:42:00.793554   71245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:42:00.793589   71245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-470825-m03
	I1228 06:42:00.814571   71245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/ha-470825-m03/id_rsa Username:docker}
	I1228 06:42:00.918215   71245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:42:00.936373   71245 kubeconfig.go:125] found "ha-470825" server: "https://192.168.49.254:8443"
	I1228 06:42:00.936405   71245 api_server.go:166] Checking apiserver status ...
	I1228 06:42:00.936452   71245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:42:00.950090   71245 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2096/cgroup
	I1228 06:42:00.959377   71245 api_server.go:192] apiserver freezer: "4:freezer:/docker/1bc6f4475684de844c1bd018960ab3442d32db11016ea7289a8f6e9a07c8f4f2/kubepods/burstable/podf441023d881a2ac9a1fc03f6f4250f2b/cd000158d315eb68e86d9b5bb74c0352f0ce560d5f79fa9c3cb17af97a9a3914"
	I1228 06:42:00.959473   71245 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1bc6f4475684de844c1bd018960ab3442d32db11016ea7289a8f6e9a07c8f4f2/kubepods/burstable/podf441023d881a2ac9a1fc03f6f4250f2b/cd000158d315eb68e86d9b5bb74c0352f0ce560d5f79fa9c3cb17af97a9a3914/freezer.state
	I1228 06:42:00.967920   71245 api_server.go:214] freezer state: "THAWED"
	I1228 06:42:00.967952   71245 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1228 06:42:00.976599   71245 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1228 06:42:00.976634   71245 status.go:463] ha-470825-m03 apiserver status = Running (err=<nil>)
	I1228 06:42:00.976674   71245 status.go:176] ha-470825-m03 status: &{Name:ha-470825-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:42:00.976697   71245 status.go:174] checking status of ha-470825-m04 ...
	I1228 06:42:00.977036   71245 cli_runner.go:164] Run: docker container inspect ha-470825-m04 --format={{.State.Status}}
	I1228 06:42:00.993679   71245 status.go:371] ha-470825-m04 host status = "Running" (err=<nil>)
	I1228 06:42:00.993703   71245 host.go:66] Checking if "ha-470825-m04" exists ...
	I1228 06:42:00.993998   71245 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-470825-m04
	I1228 06:42:01.013423   71245 host.go:66] Checking if "ha-470825-m04" exists ...
	I1228 06:42:01.013848   71245 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:42:01.013907   71245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-470825-m04
	I1228 06:42:01.031959   71245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/ha-470825-m04/id_rsa Username:docker}
	I1228 06:42:01.133937   71245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:42:01.154074   71245 status.go:176] ha-470825-m04 status: &{Name:ha-470825-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.36s)
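
Note: the non-zero exit above is expected behavior, not a failure — once any node is stopped, minikube status reports the degraded state through its exit code (7 in this run), which scripts can branch on. Sketch, profile name illustrative:

  minikube -p ha-demo node stop m02
  minikube -p ha-demo status
  echo $?   # non-zero (7 above) because one control-plane host is stopped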

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (48.62s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node start m02 --alsologtostderr -v 5
E1228 06:42:05.705375    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:42:26.185624    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 node start m02 --alsologtostderr -v 5: (46.77076434s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5: (1.726581536s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (48.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.413687821s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.47s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 stop --alsologtostderr -v 5
E1228 06:43:07.146613    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 stop --alsologtostderr -v 5: (35.525209785s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 start --wait true --alsologtostderr -v 5
E1228 06:44:29.067283    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:45:27.660083    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 start --wait true --alsologtostderr -v 5: (2m9.772252185s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.47s)
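
Note: the sequence above is the standard full-cluster bounce; node list before and after shows that stopping and restarting keeps the node set intact. Sketch, profile name illustrative:

  minikube -p ha-demo node list
  minikube -p ha-demo stop
  minikube -p ha-demo start --wait true   # wait for components before returning
  minikube -p ha-demo node list           # same node set as before the stop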

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.77s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 node delete m03 --alsologtostderr -v 5: (10.769003056s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.77s)
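
Note: the readiness check below pipes node conditions through a go-template; the same one-liner is useful on its own after removing a node (template copied from the command in this test):

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'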

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

TestMultiControlPlane/serial/StopCluster (33.13s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 stop --alsologtostderr -v 5: (33.028054431s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5: exit status 7 (106.425206ms)

                                                
                                                
-- stdout --
	ha-470825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-470825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-470825-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1228 06:46:23.113465   98875 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:46:23.113576   98875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:46:23.113587   98875 out.go:374] Setting ErrFile to fd 2...
	I1228 06:46:23.113593   98875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:46:23.113850   98875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:46:23.114037   98875 out.go:368] Setting JSON to false
	I1228 06:46:23.114079   98875 mustload.go:66] Loading cluster: ha-470825
	I1228 06:46:23.114152   98875 notify.go:221] Checking for updates...
	I1228 06:46:23.115131   98875 config.go:182] Loaded profile config "ha-470825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:46:23.115156   98875 status.go:174] checking status of ha-470825 ...
	I1228 06:46:23.115703   98875 cli_runner.go:164] Run: docker container inspect ha-470825 --format={{.State.Status}}
	I1228 06:46:23.133723   98875 status.go:371] ha-470825 host status = "Stopped" (err=<nil>)
	I1228 06:46:23.133747   98875 status.go:384] host is not running, skipping remaining checks
	I1228 06:46:23.133755   98875 status.go:176] ha-470825 status: &{Name:ha-470825 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:46:23.133778   98875 status.go:174] checking status of ha-470825-m02 ...
	I1228 06:46:23.134078   98875 cli_runner.go:164] Run: docker container inspect ha-470825-m02 --format={{.State.Status}}
	I1228 06:46:23.153914   98875 status.go:371] ha-470825-m02 host status = "Stopped" (err=<nil>)
	I1228 06:46:23.153940   98875 status.go:384] host is not running, skipping remaining checks
	I1228 06:46:23.153947   98875 status.go:176] ha-470825-m02 status: &{Name:ha-470825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:46:23.153967   98875 status.go:174] checking status of ha-470825-m04 ...
	I1228 06:46:23.154273   98875 cli_runner.go:164] Run: docker container inspect ha-470825-m04 --format={{.State.Status}}
	I1228 06:46:23.175689   98875 status.go:371] ha-470825-m04 host status = "Stopped" (err=<nil>)
	I1228 06:46:23.175708   98875 status.go:384] host is not running, skipping remaining checks
	I1228 06:46:23.175715   98875 status.go:176] ha-470825-m04 status: &{Name:ha-470825-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.13s)

TestMultiControlPlane/serial/RestartCluster (66.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1228 06:46:45.222694    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:47:12.908318    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m5.50509765s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (66.68s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.89s)

TestMultiControlPlane/serial/AddSecondaryNode (55.25s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 node add --control-plane --alsologtostderr -v 5: (54.104052288s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-470825 status --alsologtostderr -v 5: (1.142149908s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (55.25s)
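
Note: scaling the control plane back up is a single command, as run above; status then shows the new node once its API server is up. Sketch, profile name illustrative:

  minikube -p ha-demo node add --control-plane
  minikube -p ha-demo status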

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.213781464s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.21s)

TestImageBuild/serial/Setup (29.44s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-476048 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-476048 --driver=docker  --container-runtime=docker: (29.444724815s)
--- PASS: TestImageBuild/serial/Setup (29.44s)

TestImageBuild/serial/NormalBuild (1.54s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-476048
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-476048: (1.541936277s)
--- PASS: TestImageBuild/serial/NormalBuild (1.54s)

TestImageBuild/serial/BuildWithBuildArg (1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-476048
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.00s)

TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-476048
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.97s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-476048
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.97s)
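
Note: the four image-build subtests above exercise the main flag combinations of minikube image build; collected into one sketch (profile, tag, and paths illustrative):

  # plain build from a context directory
  minikube -p demo image build -t demo:latest ./app
  # build arg plus cache disabled
  minikube -p demo image build -t demo:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./app
  # Dockerfile at a non-default location inside the context
  minikube -p demo image build -t demo:latest -f inner/Dockerfile ./app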

                                                
                                    
TestJSONOutput/start/Command (68.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-465647 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-465647 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m8.799702145s)
--- PASS: TestJSONOutput/start/Command (68.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-465647 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-465647 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.3s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-465647 --output=json --user=testUser
E1228 06:50:27.660389    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-465647 --output=json --user=testUser: (11.296317619s)
--- PASS: TestJSONOutput/stop/Command (11.30s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-615627 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-615627 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (88.331806ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5caed393-0727-471d-b9fe-141977e82824","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-615627] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fcd6017-8364-4cc5-85bf-01c53926a8a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"20d0492b-ab23-4495-bc26-b7f9c60534ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7dc77580-6ba0-4722-a5ea-21da998a26fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig"}}
	{"specversion":"1.0","id":"afcf71b2-b904-4a69-8abb-d59b86c63d8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube"}}
	{"specversion":"1.0","id":"4c81dc46-850d-468b-9885-a306feb974c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4f2881e2-b199-4a71-b1c7-4da329ed1483","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4365373d-3a74-4406-bb25-fc30a19e35df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-615627" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-615627
--- PASS: TestErrorJSONOutput (0.23s)
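
Note: each line emitted with --output=json is a CloudEvents envelope, as in the stdout dump above, so failures can be consumed by tools rather than scraped from text. A sketch that extracts the error message, assuming jq is available (profile name illustrative; --driver=fail taken from the test above):

  minikube start -p demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'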

                                                
                                    
TestKicCustomNetwork/create_custom_network (30.87s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-319788 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-319788 --network=: (28.646860417s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-319788" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-319788
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-319788: (2.199982101s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.87s)

TestKicCustomNetwork/use_default_bridge_network (31.63s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-474531 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-474531 --network=bridge: (29.54266638s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-474531" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-474531
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-474531: (2.057819395s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.63s)

TestKicExistingNetwork (30.34s)

=== RUN   TestKicExistingNetwork
I1228 06:51:38.805346    4202 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1228 06:51:38.821581    4202 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1228 06:51:38.822466    4202 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1228 06:51:38.822499    4202 cli_runner.go:164] Run: docker network inspect existing-network
W1228 06:51:38.837679    4202 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1228 06:51:38.837711    4202 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1228 06:51:38.837726    4202 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1228 06:51:38.837828    4202 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1228 06:51:38.854512    4202 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e663f46973f0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:e5:53:aa:f4:ad} reservation:<nil>}
I1228 06:51:38.854773    4202 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016cf270}
I1228 06:51:38.854801    4202 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1228 06:51:38.854850    4202 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1228 06:51:38.914523    4202 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-545887 --network=existing-network
E1228 06:51:45.224539    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 06:51:50.709302    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-545887 --network=existing-network: (28.073037633s)
helpers_test.go:176: Cleaning up "existing-network-545887" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-545887
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-545887: (2.124067254s)
I1228 06:52:09.128009    4202 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.34s)
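
Note: the log above shows the network being pre-created with the same labels minikube applies itself, then reused via --network. Condensed from the commands above (profile name illustrative; the bridge driver options from the log omitted for brevity):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  minikube start -p demo --network=existing-network   # attach to the existing bridge instead of creating one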

                                                
                                    
TestKicCustomSubnet (30.42s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-725731 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-725731 --subnet=192.168.60.0/24: (28.183382787s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-725731 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-725731" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-725731
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-725731: (2.208606369s)
--- PASS: TestKicCustomSubnet (30.42s)
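
Note: --subnet can be verified against the network Docker actually allocated; the network name matches the profile name, as the inspect above shows. Sketch, profile name illustrative:

  minikube start -p demo --subnet=192.168.60.0/24
  docker network inspect demo --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24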

                                                
                                    
TestKicStaticIP (31.36s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-714832 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-714832 --static-ip=192.168.200.200: (29.046164958s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-714832 ip
helpers_test.go:176: Cleaning up "static-ip-714832" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-714832
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-714832: (2.143186205s)
--- PASS: TestKicStaticIP (31.36s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (63.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-728284 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-728284 --driver=docker  --container-runtime=docker: (28.27145189s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-730877 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-730877 --driver=docker  --container-runtime=docker: (29.311296674s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-728284
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-730877
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-730877" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-730877
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-730877: (2.278844809s)
helpers_test.go:176: Cleaning up "first-728284" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-728284
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-728284: (2.227784534s)
--- PASS: TestMinikubeProfile (63.46s)
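
The `profile` / `profile list` pair above is scriptable. A sketch assuming the usual valid/invalid layout of the JSON output (jq is a convenience here, not part of the test):

    # switch the active profile, then list every profile name from the JSON
    minikube profile first-728284
    minikube profile list --output json | jq -r '.valid[].Name'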

TestMountStart/serial/StartWithMountFirst (9.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-469442 --memory=3072 --mount-string /tmp/TestMountStartserial4169034885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-469442 --memory=3072 --mount-string /tmp/TestMountStartserial4169034885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.935292544s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.94s)
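
The mount flags exercised above describe a 9p share from the host into the node; a sketch with an illustrative host directory in place of the test's temp dir:

    # start a no-Kubernetes node with a host directory shared at /minikube-host
    minikube start -p mount-start-1-469442 --memory=3072 \
        --mount-string /tmp/host-dir:/minikube-host \
        --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543 \
        --no-kubernetes --driver=docker --container-runtime=docker
    # the share is then visible inside the node (verified by the next subtest)
    minikube -p mount-start-1-469442 ssh -- ls /minikube-host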

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-469442 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (10.40s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-471657 --memory=3072 --mount-string /tmp/TestMountStartserial4169034885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-471657 --memory=3072 --mount-string /tmp/TestMountStartserial4169034885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.395738167s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.40s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-471657 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.56s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-469442 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-469442 --alsologtostderr -v=5: (1.558984035s)
--- PASS: TestMountStart/serial/DeleteFirst (1.56s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-471657 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-471657
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-471657: (1.28680696s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-471657
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-471657: (7.663399308s)
--- PASS: TestMountStart/serial/RestartStopped (8.66s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-471657 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (85.40s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-778330 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1228 06:55:27.660358    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-778330 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.835204741s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.40s)

TestMultiNode/serial/DeployApp2Nodes (5.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-778330 -- rollout status deployment/busybox: (3.209061677s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-9xfzw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-s2cm6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-9xfzw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-s2cm6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-9xfzw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-s2cm6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.18s)
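
The DNS assertions above walk the resolution chain (public name, short service name, service FQDN) from a replica on each node; one representative call, with a pod name taken from this run:

    # resolve the API service FQDN from inside a busybox replica
    minikube kubectl -p multinode-778330 -- \
        exec busybox-769dd8b7dd-9xfzw -- nslookup kubernetes.default.svc.cluster.local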

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-9xfzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-9xfzw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-s2cm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-778330 -- exec busybox-769dd8b7dd-s2cm6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
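
The pipeline above leans on busybox nslookup printing the answer's address on its fifth output line; the extracted address (the host-side gateway, 192.168.67.1 in this run) is then pinged. As a sketch, from inside a pod:

    # pull the resolved address for host.minikube.internal, then ping it once
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"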

TestMultiNode/serial/AddNode (34.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-778330 -v=5 --alsologtostderr
E1228 06:56:45.222253    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-778330 -v=5 --alsologtostderr: (33.608765213s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.31s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-778330 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp testdata/cp-test.txt multinode-778330:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile80907206/001/cp-test_multinode-778330.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330:/home/docker/cp-test.txt multinode-778330-m02:/home/docker/cp-test_multinode-778330_multinode-778330-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m02 "sudo cat /home/docker/cp-test_multinode-778330_multinode-778330-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330:/home/docker/cp-test.txt multinode-778330-m03:/home/docker/cp-test_multinode-778330_multinode-778330-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m03 "sudo cat /home/docker/cp-test_multinode-778330_multinode-778330-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp testdata/cp-test.txt multinode-778330-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile80907206/001/cp-test_multinode-778330-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330-m02:/home/docker/cp-test.txt multinode-778330:/home/docker/cp-test_multinode-778330-m02_multinode-778330.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330 "sudo cat /home/docker/cp-test_multinode-778330-m02_multinode-778330.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330-m02:/home/docker/cp-test.txt multinode-778330-m03:/home/docker/cp-test_multinode-778330-m02_multinode-778330-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m03 "sudo cat /home/docker/cp-test_multinode-778330-m02_multinode-778330-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp testdata/cp-test.txt multinode-778330-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile80907206/001/cp-test_multinode-778330-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330-m03:/home/docker/cp-test.txt multinode-778330:/home/docker/cp-test_multinode-778330-m03_multinode-778330.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330 "sudo cat /home/docker/cp-test_multinode-778330-m03_multinode-778330.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 cp multinode-778330-m03:/home/docker/cp-test.txt multinode-778330-m02:/home/docker/cp-test_multinode-778330-m03_multinode-778330-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 ssh -n multinode-778330-m02 "sudo cat /home/docker/cp-test_multinode-778330-m03_multinode-778330-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.49s)
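
Each step above is one `minikube cp` in a host-to-node, node-to-host, or node-to-node direction, followed by an ssh readback against the target node; a representative pair:

    # copy a file from the host into a named node, then read it back over ssh
    minikube -p multinode-778330 cp testdata/cp-test.txt \
        multinode-778330-m02:/home/docker/cp-test.txt
    minikube -p multinode-778330 ssh -n multinode-778330-m02 \
        "sudo cat /home/docker/cp-test.txt"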

TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-778330 node stop m03: (1.32240413s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-778330 status: exit status 7 (560.155558ms)
-- stdout --
	multinode-778330
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-778330-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-778330-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr: exit status 7 (528.140689ms)
-- stdout --
	multinode-778330
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-778330-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-778330-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1228 06:57:08.235062  171876 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:57:08.235213  171876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:08.235242  171876 out.go:374] Setting ErrFile to fd 2...
	I1228 06:57:08.235256  171876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:57:08.235658  171876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:57:08.235945  171876 out.go:368] Setting JSON to false
	I1228 06:57:08.235992  171876 mustload.go:66] Loading cluster: multinode-778330
	I1228 06:57:08.236762  171876 config.go:182] Loaded profile config "multinode-778330": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:57:08.236787  171876 status.go:174] checking status of multinode-778330 ...
	I1228 06:57:08.237592  171876 cli_runner.go:164] Run: docker container inspect multinode-778330 --format={{.State.Status}}
	I1228 06:57:08.238542  171876 notify.go:221] Checking for updates...
	I1228 06:57:08.256421  171876 status.go:371] multinode-778330 host status = "Running" (err=<nil>)
	I1228 06:57:08.256444  171876 host.go:66] Checking if "multinode-778330" exists ...
	I1228 06:57:08.256735  171876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-778330
	I1228 06:57:08.287236  171876 host.go:66] Checking if "multinode-778330" exists ...
	I1228 06:57:08.287538  171876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:08.287588  171876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-778330
	I1228 06:57:08.306850  171876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/multinode-778330/id_rsa Username:docker}
	I1228 06:57:08.401525  171876 ssh_runner.go:195] Run: systemctl --version
	I1228 06:57:08.408075  171876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:08.421119  171876 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1228 06:57:08.484276  171876 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-28 06:57:08.474429025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1228 06:57:08.484810  171876 kubeconfig.go:125] found "multinode-778330" server: "https://192.168.67.2:8443"
	I1228 06:57:08.484844  171876 api_server.go:166] Checking apiserver status ...
	I1228 06:57:08.484896  171876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1228 06:57:08.498503  171876 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2163/cgroup
	I1228 06:57:08.507167  171876 api_server.go:192] apiserver freezer: "4:freezer:/docker/b0acc3640aa3b6682d2a3edcbac098185beef81cfb11af48cb4f70fa77af3eb1/kubepods/burstable/pod1afc55560a11de6a2fd81090143cfb74/ad5bd1596c3671a980911bda2d3d6699e98ccc6833d4dcf564dfb986a81a0155"
	I1228 06:57:08.507248  171876 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b0acc3640aa3b6682d2a3edcbac098185beef81cfb11af48cb4f70fa77af3eb1/kubepods/burstable/pod1afc55560a11de6a2fd81090143cfb74/ad5bd1596c3671a980911bda2d3d6699e98ccc6833d4dcf564dfb986a81a0155/freezer.state
	I1228 06:57:08.515644  171876 api_server.go:214] freezer state: "THAWED"
	I1228 06:57:08.515673  171876 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1228 06:57:08.525081  171876 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1228 06:57:08.525154  171876 status.go:463] multinode-778330 apiserver status = Running (err=<nil>)
	I1228 06:57:08.525171  171876 status.go:176] multinode-778330 status: &{Name:multinode-778330 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:57:08.525189  171876 status.go:174] checking status of multinode-778330-m02 ...
	I1228 06:57:08.525516  171876 cli_runner.go:164] Run: docker container inspect multinode-778330-m02 --format={{.State.Status}}
	I1228 06:57:08.543705  171876 status.go:371] multinode-778330-m02 host status = "Running" (err=<nil>)
	I1228 06:57:08.543731  171876 host.go:66] Checking if "multinode-778330-m02" exists ...
	I1228 06:57:08.544040  171876 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-778330-m02
	I1228 06:57:08.562910  171876 host.go:66] Checking if "multinode-778330-m02" exists ...
	I1228 06:57:08.563328  171876 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1228 06:57:08.563380  171876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-778330-m02
	I1228 06:57:08.585669  171876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22352-2382/.minikube/machines/multinode-778330-m02/id_rsa Username:docker}
	I1228 06:57:08.681515  171876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1228 06:57:08.694278  171876 status.go:176] multinode-778330-m02 status: &{Name:multinode-778330-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:57:08.694313  171876 status.go:174] checking status of multinode-778330-m03 ...
	I1228 06:57:08.694625  171876 cli_runner.go:164] Run: docker container inspect multinode-778330-m03 --format={{.State.Status}}
	I1228 06:57:08.711423  171876 status.go:371] multinode-778330-m03 host status = "Stopped" (err=<nil>)
	I1228 06:57:08.711447  171876 status.go:384] host is not running, skipping remaining checks
	I1228 06:57:08.711454  171876 status.go:176] multinode-778330-m03 status: &{Name:multinode-778330-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
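
Two details are visible in the trace above: `status` exits 7 when any node's host is stopped, so scripts can branch on the return code, and the apiserver probe works by finding the kube-apiserver process, checking its cgroup freezer state is THAWED, then hitting /healthz. The scripted side, as a sketch:

    # stop one worker, then branch on the status exit code (7 = degraded)
    minikube -p multinode-778330 node stop m03
    minikube -p multinode-778330 status
    rc=$?
    [ "$rc" -eq 7 ] && echo "at least one node is stopped"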

TestMultiNode/serial/StartAfterStop (9.58s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-778330 node start m03 -v=5 --alsologtostderr: (8.768822805s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.58s)

TestMultiNode/serial/RestartKeepsNodes (76.01s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-778330
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-778330
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-778330: (23.111160152s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-778330 --wait=true -v=5 --alsologtostderr
E1228 06:58:08.269354    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-778330 --wait=true -v=5 --alsologtostderr: (52.784850386s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-778330
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.01s)

TestMultiNode/serial/DeleteNode (5.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-778330 node delete m03: (5.061247563s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.79s)
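
The go-template query above emits one Ready-condition status per node, so after deleting m03 the expected output is two True lines. Stand-alone form:

    # per-node Ready condition, one line per node
    kubectl --context multinode-778330 get nodes -o \
        go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'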

TestMultiNode/serial/StopMultiNode (22.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-778330 stop: (21.853544902s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-778330 status: exit status 7 (99.617517ms)
-- stdout --
	multinode-778330
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-778330-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr: exit status 7 (91.126158ms)
-- stdout --
	multinode-778330
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-778330-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1228 06:59:02.103377  185605 out.go:360] Setting OutFile to fd 1 ...
	I1228 06:59:02.103573  185605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:59:02.103604  185605 out.go:374] Setting ErrFile to fd 2...
	I1228 06:59:02.103625  185605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 06:59:02.104020  185605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 06:59:02.104326  185605 out.go:368] Setting JSON to false
	I1228 06:59:02.104380  185605 mustload.go:66] Loading cluster: multinode-778330
	I1228 06:59:02.105073  185605 config.go:182] Loaded profile config "multinode-778330": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 06:59:02.105119  185605 status.go:174] checking status of multinode-778330 ...
	I1228 06:59:02.105879  185605 cli_runner.go:164] Run: docker container inspect multinode-778330 --format={{.State.Status}}
	I1228 06:59:02.106933  185605 notify.go:221] Checking for updates...
	I1228 06:59:02.125904  185605 status.go:371] multinode-778330 host status = "Stopped" (err=<nil>)
	I1228 06:59:02.125924  185605 status.go:384] host is not running, skipping remaining checks
	I1228 06:59:02.125931  185605 status.go:176] multinode-778330 status: &{Name:multinode-778330 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1228 06:59:02.125956  185605 status.go:174] checking status of multinode-778330-m02 ...
	I1228 06:59:02.126265  185605 cli_runner.go:164] Run: docker container inspect multinode-778330-m02 --format={{.State.Status}}
	I1228 06:59:02.149443  185605 status.go:371] multinode-778330-m02 host status = "Stopped" (err=<nil>)
	I1228 06:59:02.149461  185605 status.go:384] host is not running, skipping remaining checks
	I1228 06:59:02.149468  185605 status.go:176] multinode-778330-m02 status: &{Name:multinode-778330-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.04s)

TestMultiNode/serial/RestartMultiNode (50.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-778330 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-778330 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (50.180368462s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-778330 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.86s)

TestMultiNode/serial/ValidateNameConflict (33.11s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-778330
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-778330-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-778330-m02 --driver=docker  --container-runtime=docker: exit status 14 (87.305086ms)
-- stdout --
	* [multinode-778330-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-778330-m02' is duplicated with machine name 'multinode-778330-m02' in profile 'multinode-778330'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-778330-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-778330-m03 --driver=docker  --container-runtime=docker: (30.016504042s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-778330
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-778330: exit status 80 (769.974724ms)
-- stdout --
	* Adding node m03 to cluster multinode-778330 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-778330-m03 already exists in multinode-778330-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-778330-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-778330-m03: (2.188572014s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.11s)

TestScheduledStopUnix (101.70s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-686921 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-686921 --memory=3072 --driver=docker  --container-runtime=docker: (28.481201297s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-686921 --schedule 5m -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1228 07:00:58.945295  199501 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:00:58.945824  199501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:00:58.945861  199501 out.go:374] Setting ErrFile to fd 2...
	I1228 07:00:58.945881  199501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:00:58.946622  199501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:00:58.947084  199501 out.go:368] Setting JSON to false
	I1228 07:00:58.947260  199501 mustload.go:66] Loading cluster: scheduled-stop-686921
	I1228 07:00:58.947996  199501 config.go:182] Loaded profile config "scheduled-stop-686921": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:00:58.948133  199501 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/scheduled-stop-686921/config.json ...
	I1228 07:00:58.948399  199501 mustload.go:66] Loading cluster: scheduled-stop-686921
	I1228 07:00:58.948591  199501 config.go:182] Loaded profile config "scheduled-stop-686921": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-686921 -n scheduled-stop-686921
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-686921 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1228 07:00:59.401859  199593 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:00:59.402144  199593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:00:59.402177  199593 out.go:374] Setting ErrFile to fd 2...
	I1228 07:00:59.402211  199593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:00:59.402503  199593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:00:59.402835  199593 out.go:368] Setting JSON to false
	I1228 07:00:59.403085  199593 daemonize_unix.go:73] killing process 199517 as it is an old scheduled stop
	I1228 07:00:59.403191  199593 mustload.go:66] Loading cluster: scheduled-stop-686921
	I1228 07:00:59.403599  199593 config.go:182] Loaded profile config "scheduled-stop-686921": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:00:59.403702  199593 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/scheduled-stop-686921/config.json ...
	I1228 07:00:59.403927  199593 mustload.go:66] Loading cluster: scheduled-stop-686921
	I1228 07:00:59.404085  199593 config.go:182] Loaded profile config "scheduled-stop-686921": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1228 07:00:59.419790    4202 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/scheduled-stop-686921/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-686921 --cancel-scheduled
minikube stop output:
-- stdout --
	* All existing scheduled stops cancelled
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-686921 -n scheduled-stop-686921
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-686921
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-686921 --schedule 15s -v=5 --alsologtostderr
minikube stop output:
** stderr ** 
	I1228 07:01:25.316596  200325 out.go:360] Setting OutFile to fd 1 ...
	I1228 07:01:25.316777  200325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:01:25.316808  200325 out.go:374] Setting ErrFile to fd 2...
	I1228 07:01:25.316828  200325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1228 07:01:25.317106  200325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22352-2382/.minikube/bin
	I1228 07:01:25.317408  200325 out.go:368] Setting JSON to false
	I1228 07:01:25.317546  200325 mustload.go:66] Loading cluster: scheduled-stop-686921
	I1228 07:01:25.317985  200325 config.go:182] Loaded profile config "scheduled-stop-686921": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1228 07:01:25.318098  200325 profile.go:143] Saving config to /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/scheduled-stop-686921/config.json ...
	I1228 07:01:25.318324  200325 mustload.go:66] Loading cluster: scheduled-stop-686921
	I1228 07:01:25.318511  200325 config.go:182] Loaded profile config "scheduled-stop-686921": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1228 07:01:45.222120    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-686921
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-686921: exit status 7 (75.048696ms)
-- stdout --
	scheduled-stop-686921
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-686921 -n scheduled-stop-686921
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-686921 -n scheduled-stop-686921: exit status 7 (75.336576ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-686921" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-686921
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-686921: (1.636454053s)
--- PASS: TestScheduledStopUnix (101.70s)
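
The scheduling surface exercised above is three variants of `minikube stop`: `--schedule <duration>` forks a background process that waits and then stops the cluster, re-issuing it replaces the pending stop (note the "killing process ... old scheduled stop" line in the trace), and `--cancel-scheduled` clears everything pending. In short:

    # arm a stop 5 minutes out, replace it with a 15s one, then cancel
    minikube stop -p scheduled-stop-686921 --schedule 5m
    minikube stop -p scheduled-stop-686921 --schedule 15s
    minikube stop -p scheduled-stop-686921 --cancel-scheduled
    # once a scheduled stop has fired, `minikube status` exits 7 (Stopped)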

TestSkaffold (137.09s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3188426813 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-556203 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-556203 --memory=3072 --driver=docker  --container-runtime=docker: (29.251741622s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3188426813 run --minikube-profile skaffold-556203 --kube-context skaffold-556203 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3188426813 run --minikube-profile skaffold-556203 --kube-context skaffold-556203 --status-check=true --port-forward=false --interactive=false: (1m32.358976934s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-848c7bc988-gnsgp" [73ddd41b-6e8a-47dd-8778-585c0fa2f6fe] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003332535s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-5c78df9d76-nrtqj" [63a6d886-984a-4034-aee8-9e74c8c19e28] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004148263s
helpers_test.go:176: Cleaning up "skaffold-556203" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-556203
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-556203: (3.085987825s)
--- PASS: TestSkaffold (137.09s)
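
The deploy step above is a plain `skaffold run` pinned to the minikube profile and kube-context, with status checking on and port-forwarding and interactivity off; the equivalent invocation with a skaffold binary on PATH:

    skaffold run --minikube-profile skaffold-556203 \
        --kube-context skaffold-556203 \
        --status-check=true --port-forward=false --interactive=false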

TestInsufficientStorage (12.86s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-332663 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-332663 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.457686193s)
-- stdout --
	{"specversion":"1.0","id":"c12e80b9-6f70-4426-8c8a-4bcb00b2743c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-332663] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ab86af9-e722-42dd-a174-2b69b0cbac49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22352"}}
	{"specversion":"1.0","id":"7b2ec999-6a42-46f3-9b9c-debdc6ec1ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c8d02cbb-8b98-48a9-ae55-94a424a6a50f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig"}}
	{"specversion":"1.0","id":"ba8cbb07-78b8-4041-8c70-e6bf851b31b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube"}}
	{"specversion":"1.0","id":"3be153c6-67c3-4750-926a-7a632efd1880","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"715d2668-5f96-4ad1-80c0-c5e6c2888b20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"960364f8-c95f-4a46-b605-7925735fa2c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"28f1134a-5e4b-4af4-ad28-31e20aeaa21d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6ab297ef-d994-4815-b2c1-50ab18e6fd1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c34c9ce-5af5-4759-86f5-b1ebf273ef07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c504ff94-6e7b-43b3-817b-2323263924e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-332663\" primary control-plane node in \"insufficient-storage-332663\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff4c9935-98b6-4c8c-9512-e750e19bb11b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766884053-22351 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcc7132d-dd2b-4a36-b07c-640cabbe4d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"42a2b33e-b81a-43db-9641-80b3bc7937d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-332663 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-332663 --output=json --layout=cluster: exit status 7 (303.992624ms)
-- stdout --
	{"Name":"insufficient-storage-332663","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-332663","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1228 07:04:39.946422  210870 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-332663" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-332663 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-332663 --output=json --layout=cluster: exit status 7 (344.403783ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-332663","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-332663","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1228 07:04:40.289465  210937 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-332663" does not appear in /home/jenkins/minikube-integration/22352-2382/kubeconfig
	E1228 07:04:40.299556  210937 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/insufficient-storage-332663/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-332663" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-332663
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-332663: (1.752115696s)
--- PASS: TestInsufficientStorage (12.86s)
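
The "status --output=json --layout=cluster" payloads above share one small schema: a top-level Name/StatusCode/StatusName triple, a Components map, and a Nodes array, with HTTP-style codes (507 InsufficientStorage, 500 Error, 405 Stopped). A minimal Go decoding sketch, modeling only the fields visible in this log rather than minikube's actual internal types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Component is one entry of the Components map in the status JSON above.
	type Component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	// Node mirrors one entry of the Nodes array.
	type Node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]Component `json:"Components"`
	}

	// ClusterStatus mirrors the top-level object printed by the command.
	type ClusterStatus struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]Component `json:"Components"`
		Nodes      []Node               `json:"Nodes"`
	}

	func main() {
		// Abbreviated payload in the shape shown in the stdout above.
		raw := `{"Name":"demo","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[]}`
		var st ClusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Println(st.StatusName, st.Components["kubeconfig"].StatusName)
	}

As the two runs above show, the command exits 7 while still printing valid JSON, so a caller has to parse stdout regardless of the exit code.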

                                                
                                    
x
+
TestRunningBinaryUpgrade (353.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3194841388 start -p running-upgrade-161368 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3194841388 start -p running-upgrade-161368 --memory=3072 --vm-driver=docker  --container-runtime=docker: (55.981899906s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-161368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1228 07:16:45.221717    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-161368 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m54.033432116s)
helpers_test.go:176: Cleaning up "running-upgrade-161368" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-161368
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-161368: (2.426275911s)
--- PASS: TestRunningBinaryUpgrade (353.26s)
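
This test is an in-place binary upgrade: the released v1.35.0 binary (the temp-suffixed /tmp path above) creates the profile with the legacy --vm-driver flag, then the freshly built binary restarts the same, still-running profile, proving that state written by the old release stays readable. A sketch of the flow as plain command execution; the profile name is hypothetical and the helper is illustrative:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes one CLI step and aborts the sketch on the first failure.
	func run(bin string, args ...string) {
		if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
		}
	}

	func main() {
		const profile = "running-upgrade-demo" // hypothetical profile name
		oldBin := "/tmp/minikube-v1.35.0"      // released binary (log path shortened)
		newBin := "out/minikube-linux-arm64"   // binary under test

		// 1. Old binary creates the cluster (old releases used --vm-driver).
		run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=docker")
		// 2. New binary adopts the running profile in place.
		run(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=docker")
		// 3. Cleanup, as helpers_test.go does above.
		run(newBin, "delete", "-p", profile)
	}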

                                                
                                    
x
+
TestKubernetesUpgrade (96.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1228 07:20:27.660709    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.844666792s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-123694 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-123694 --alsologtostderr: (2.235690563s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-123694 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-123694 status --format={{.Host}}: exit status 7 (70.865619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.472523357s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-123694 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (93.125589ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-123694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-123694
	    minikube start -p kubernetes-upgrade-123694 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1236942 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-123694 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-123694 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.928453161s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-123694" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-123694
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-123694: (2.768706816s)
--- PASS: TestKubernetesUpgrade (96.50s)
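
The downgrade attempt fails in ~93ms with exit status 106 because the requested version is compared against the version recorded in the profile before anything touches the cluster. A sketch of that guard; whether minikube actually uses golang.org/x/mod/semver is an assumption, but the rule matches the output above (downgrades rejected, upgrades and same-version restarts allowed):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkDowngrade mirrors the refusal above: asking an existing v1.35.0
	// profile for v1.28.0 must fail before any cluster mutation happens.
	func checkDowngrade(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		fmt.Println(checkDowngrade("v1.35.0", "v1.28.0")) // error, mirrors exit status 106
		fmt.Println(checkDowngrade("v1.28.0", "v1.35.0")) // <nil>: upgrades pass
	}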

                                                
                                    
x
+
TestMissingContainerUpgrade (89.64s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.246215672 start -p missing-upgrade-129984 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.246215672 start -p missing-upgrade-129984 --memory=3072 --driver=docker  --container-runtime=docker: (34.791189989s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-129984
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-129984: (1.653823661s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-129984
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-129984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1228 07:19:15.081589    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-129984 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (49.703806747s)
helpers_test.go:176: Cleaning up "missing-upgrade-129984" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-129984
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-129984: (2.585695641s)
--- PASS: TestMissingContainerUpgrade (89.64s)
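
The "missing container" scenario deliberately breaks the profile between the two starts: docker stop and docker rm remove the node container while the profile directory stays on disk, so the new binary has to recreate the container rather than adopt one. A small sketch of the precondition check, using only the docker CLI calls already shown above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerExists reports whether the Docker daemon knows a container
	// with this exact name, running or stopped.
	func containerExists(name string) bool {
		out, err := exec.Command("docker", "ps", "-a", "--format", "{{.Names}}").Output()
		if err != nil {
			return false
		}
		for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if n == name {
				return true
			}
		}
		return false
	}

	func main() {
		// After "docker stop" + "docker rm" the test expects this to be false,
		// and the following "minikube start" must rebuild the node container.
		fmt.Println("container present:", containerExists("missing-upgrade-129984"))
	}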

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245402 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-245402 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (123.771292ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-245402] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22352
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22352-2382/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22352-2382/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
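
The failure above is pure flag validation: --no-kubernetes and --kubernetes-version contradict each other, and the check runs before any driver work, which is why the whole test takes ~124ms. A minimal sketch of the same mutual-exclusion check with the standard library (the wiring is illustrative, not minikube's actual CLI code):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
		flag.Parse()

		// Reject the contradictory combination up front, mirroring exit status 14.
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}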

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (36.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245402 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245402 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.417606406s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-245402 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245402 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245402 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.280724718s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-245402 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-245402 status -o json: exit status 2 (314.274034ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-245402","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-245402
E1228 07:05:27.660418    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-245402: (1.80368226s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245402 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245402 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (9.176063955s)
--- PASS: TestNoKubernetes/serial/Start (9.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22352-2382/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-245402 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-245402 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.289502ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
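
The "ssh: Process exited with status 3" above is the passing case: systemctl is-active --quiet prints nothing and reports purely through its exit status (0 = active, non-zero = not active, with 3 being the usual code for an inactive unit), so a non-zero exit is the proof that no kubelet runs. A sketch of reading that exit code from Go; the test wraps the call in minikube ssh, while this runs systemctl directly for illustration:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		err := cmd.Run()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr):
			// Exit status 3, as in the log, means the unit is not active.
			fmt.Printf("kubelet not active (exit status %d)\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}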

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-245402
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-245402: (1.326986415s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-245402 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-245402 --driver=docker  --container-runtime=docker: (7.89531605s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-245402 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-245402 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.648955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.86s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (318.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.234381960 start -p stopped-upgrade-662448 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.234381960 start -p stopped-upgrade-662448 --memory=3072 --vm-driver=docker  --container-runtime=docker: (45.379030519s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.234381960 -p stopped-upgrade-662448 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.234381960 -p stopped-upgrade-662448 stop: (2.247447439s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-662448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-662448 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m31.326718954s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (318.95s)

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (96.66s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-766139 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
E1228 07:21:45.221853    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-766139 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m24.585870409s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-766139 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-766139
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-766139: (11.235092885s)
--- PASS: TestPreload/Start-NoPreload-PullImage (96.66s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (54.96s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-766139 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-766139 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (54.717565955s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-766139 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (54.96s)
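
The two preload tests form one scenario: start with --preload=false, pull a user image into the runtime, stop, restart with --preload=true, and confirm the preloaded tarball did not clobber the previously pulled image. The final "image list" is the actual assertion; a sketch of that check (binary, profile, and image names copied from the log; the substring matching is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "test-preload-766139"
		image := "ghcr.io/medyagh/image-mirrors/busybox"

		// List images inside the restarted cluster and require that the image
		// pulled before the preload restart is still present.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "list").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if strings.Contains(string(out), image) {
			fmt.Println("user image survived the preloaded restart")
		} else {
			fmt.Println("user image missing after restart")
		}
	}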

                                                
                                    
x
+
TestPause/serial/Start (48.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-511050 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1228 07:24:15.084301    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-511050 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (48.016912813s)
--- PASS: TestPause/serial/Start (48.02s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (39.93s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-511050 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1228 07:25:10.710191    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:27.660228    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:25:38.125059    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-511050 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.912189462s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.93s)

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-511050 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-511050 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-511050 --output=json --layout=cluster: exit status 2 (331.865007ms)

                                                
                                                
-- stdout --
	{"Name":"pause-511050","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-511050","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
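
The cluster layout reuses HTTP status semantics, which makes the JSON self-describing: across this run 200 is OK, 405 Stopped, 418 Paused (the teapot code), 500 Error, and 507 InsufficientStorage. A lookup-table sketch of exactly the codes observed in this log; other codes exist but are not shown here:

	package main

	import "fmt"

	// statusNames collects the code->name pairs that appear verbatim in the
	// status JSON of this run.
	var statusNames = map[int]string{
		200: "OK",
		405: "Stopped",
		418: "Paused",
		500: "Error",
		507: "InsufficientStorage",
	}

	func main() {
		for _, code := range []int{418, 405, 200} {
			fmt.Printf("%d => %s\n", code, statusNames[code])
		}
	}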

                                                
                                    
x
+
TestPause/serial/Unpause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-511050 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.97s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-511050 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.44s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-511050 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-511050 --alsologtostderr -v=5: (2.443974403s)
--- PASS: TestPause/serial/DeletePaused (2.44s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-511050
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-511050: exit status 1 (70.132034ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-511050: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)
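
Deletion is verified negatively: docker volume inspect must fail for the profile's volume, and the "no such volume" error with exit status 1 above is the success condition (the test also checks docker ps -a and docker network ls the same way). A sketch of that inverted assertion:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// After "minikube delete" the profile's named volume must be gone;
		// a non-zero exit from "docker volume inspect" is what we want here.
		err := exec.Command("docker", "volume", "inspect", "pause-511050").Run()
		if err != nil {
			fmt.Println("volume gone, as expected:", err)
		} else {
			fmt.Println("volume still exists: cleanup failed")
		}
	}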

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (67.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1228 07:26:45.222943    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m7.897502131s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.90s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-662448
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (62.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m2.715768909s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-436830 "pgrep -a kubelet"
I1228 07:26:53.698319    4202 config.go:182] Loaded profile config "auto-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zkmlp" [2f232afa-d4ff-4f96-b0c0-5d2c4f996f2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zkmlp" [2f232afa-d4ff-4f96-b0c0-5d2c4f996f2d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004727868s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.40s)
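
Each NetCatPod step deploys testdata/netcat-deployment.yaml and polls up to 15m for the app=netcat pod to move Pending -> Running, as the helpers_test.go lines above trace. Roughly the same wait expressed with kubectl wait (an approximation of the harness's own poller, which also tracks the intermediate Ready conditions):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until the netcat pod is Ready or the 15-minute budget is spent.
		cmd := exec.Command("kubectl", "--context", "auto-436830",
			"wait", "--for=condition=ready", "pod",
			"-l", "app=netcat", "--timeout=15m")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}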

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
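
Every network-plugin group ends with the same three probes run from inside the netcat pod: DNS (nslookup kubernetes.default), localhost reachability, and hairpin, where the pod dials its own service name back through the cluster network. In the nc invocations, -z only checks for a listener, -w 5 caps the connect wait at five seconds, and -i 5 adds an interval between probes. A sketch that drives all three via kubectl exec, with the context and deployment names taken from the run above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs one connectivity check inside the netcat deployment, the
	// same way the three tests above shell out through kubectl exec.
	func probe(kubeContext, shellCmd string) error {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("probe %q failed: %v\n%s", shellCmd, err, out)
		}
		return nil
	}

	func main() {
		for _, check := range []string{
			"nslookup kubernetes.default",    // DNS resolution
			"nc -w 5 -i 5 -z localhost 8080", // localhost reachability
			"nc -w 5 -i 5 -z netcat 8080",    // hairpin: pod -> its own service
		} {
			if err := probe("auto-436830", check); err != nil {
				fmt.Println(err)
			}
		}
	}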

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (67.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.903886862s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-dxg7w" [46b53bcd-99ba-4001-b6d7-651e03309478] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003935042s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-436830 "pgrep -a kubelet"
I1228 07:27:58.326441    4202 config.go:182] Loaded profile config "kindnet-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-xfw62" [5769e859-4de0-4b56-8ede-b261187a33d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-xfw62" [5769e859-4de0-4b56-8ede-b261187a33d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003072103s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (53.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (53.866896783s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-ngq8z" [a6b6011e-905b-43ab-a131-a70072055fe2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003948657s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-436830 "pgrep -a kubelet"
I1228 07:28:41.974923    4202 config.go:182] Loaded profile config "calico-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-436830 replace --force -f testdata/netcat-deployment.yaml
I1228 07:28:42.316579    4202 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-xn5gg" [516b14ca-917b-4508-9243-8936e15fe6d5] Pending
helpers_test.go:353: "netcat-5dd4ccdc4b-xn5gg" [516b14ca-917b-4508-9243-8936e15fe6d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-xn5gg" [516b14ca-917b-4508-9243-8936e15fe6d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004571796s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (71.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m11.661117431s)
--- PASS: TestNetworkPlugins/group/false/Start (71.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-436830 "pgrep -a kubelet"
I1228 07:29:29.233710    4202 config.go:182] Loaded profile config "custom-flannel-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7lcs7" [2b4078b8-463f-4d78-9572-3dd991f9c466] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7lcs7" [2b4078b8-463f-4d78-9572-3dd991f9c466] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00393681s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (47.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1228 07:30:27.660364    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (47.348174489s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-436830 "pgrep -a kubelet"
I1228 07:30:32.622127    4202 config.go:182] Loaded profile config "false-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-gkzxh" [fd846adf-0c5f-434a-945f-ec92d63a8d3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-gkzxh" [fd846adf-0c5f-434a-945f-ec92d63a8d3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.002877505s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-436830 "pgrep -a kubelet"
I1228 07:30:53.348913    4202 config.go:182] Loaded profile config "enable-default-cni-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-wqcgg" [fba8ec6c-fd2c-411a-9076-d3f41333a93e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-wqcgg" [fba8ec6c-fd2c-411a-9076-d3f41333a93e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004783136s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (55.2s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (55.203326019s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (71.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1228 07:31:45.222310    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.064060    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.069296    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.079567    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.099835    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.140079    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.220373    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.380731    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:54.701217    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:55.341443    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:56.621643    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:31:59.182709    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m11.601738182s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.60s)
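
To confirm which CNI configuration a Start actually installed, the node can be inspected over minikube ssh. The /etc/cni/net.d path is the conventional CNI config directory and is an assumption about the node image, not something shown in this log:

    out/minikube-linux-arm64 ssh -p bridge-436830 "ls /etc/cni/net.d"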

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-r4rqd" [cc2c3d42-0be8-4461-b189-19b915e1f565] Running
E1228 07:32:04.302908    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003469142s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)
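
ControllerPod only runs for plugins that ship their own controller; for flannel it waits up to 10m for the kube-flannel DaemonSet pod (label app=flannel) to be Running, as the log shows. A hypothetical manual check using the same namespace and label:

    kubectl --context flannel-436830 -n kube-flannel get pods -l app=flannel -o wide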

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-436830 "pgrep -a kubelet"
I1228 07:32:08.270816    4202 config.go:182] Loaded profile config "flannel-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-436830 replace --force -f testdata/netcat-deployment.yaml
I1228 07:32:08.590034    4202 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-2bw46" [eca799a7-e638-464c-a8e7-9b1809c2e1ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-2bw46" [eca799a7-e638-464c-a8e7-9b1809c2e1ee] Running
E1228 07:32:14.543127    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00410063s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-436830 "pgrep -a kubelet"
I1228 07:32:41.625262    4202 config.go:182] Loaded profile config "bridge-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.39s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-47rb6" [c93c106f-5b73-405d-9971-8a1a45927530] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-47rb6" [c93c106f-5b73-405d-9971-8a1a45927530] Running
E1228 07:32:51.966735    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:51.971910    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:51.982178    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:52.002574    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:52.043234    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:52.124261    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:52.284906    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:52.605544    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:53.246331    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:32:54.526839    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.006028307s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.39s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (73.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m13.851544328s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (73.85s)
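
Note the flag difference between this Start and the CNI groups above: the other profiles use --cni=<plugin>, while kubenet is selected with --network-plugin=kubenet, i.e. the kubelet's built-in kubenet mode rather than a CNI add-on. The full command, copied from the log:

    out/minikube-linux-arm64 start -p kubenet-436830 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker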

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
TestPreload/PreloadSrc/gcs (4.7s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-296443 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-296443 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (4.501699337s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-296443" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-296443
--- PASS: TestPreload/PreloadSrc/gcs (4.70s)

                                                
                                    
TestPreload/PreloadSrc/github (3.84s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-495694 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-495694 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (3.53288263s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-495694" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-495694
--- PASS: TestPreload/PreloadSrc/github (3.84s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.84s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-162019 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-162019" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-162019
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.84s)
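
The timings tell the caching story: gcs (4.7s) and github (3.84s) each download a preload tarball, while gcs-cached finishes in 0.84s, presumably because the v1.34.0-rc.2 tarball fetched in the github subtest is already on disk. A way to observe this locally (the cache path is minikube's usual preload location, stated here as an assumption, and the profile name is hypothetical):

    ls ~/.minikube/cache/preloaded-tarball/
    out/minikube-linux-arm64 start -p preload-demo --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --driver=docker --container-runtime=docker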

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (60.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-772313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1228 07:33:32.928997    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.615189    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.620427    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.630664    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.651007    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.692166    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.772634    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:35.933023    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:36.253599    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:36.894508    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:38.174871    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:40.735906    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:45.857099    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:33:56.097399    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-772313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m0.225244254s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-436830 "pgrep -a kubelet"
I1228 07:33:56.610166    4202 config.go:182] Loaded profile config "kubenet-436830": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-436830 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-92rzt" [3aa3621e-5cfa-49c8-8810-83479b1c130d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-92rzt" [3aa3621e-5cfa-49c8-8810-83479b1c130d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003614367s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-436830 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-436830 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)
E1228 07:38:57.005606    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.011052    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.021436    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.041757    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.082127    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.162502    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.322874    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:57.643438    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:58.283779    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:59.564578    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:02.125521    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:03.302550    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:03.893104    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:07.252354    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:15.081688    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/skaffold-556203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:17.492883    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-772313 create -f testdata/busybox.yaml
E1228 07:34:29.783382    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [67bc02c3-6e5e-4179-8897-0fc0f4d94b22] Pending
E1228 07:34:30.103625    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [67bc02c3-6e5e-4179-8897-0fc0f4d94b22] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1228 07:34:32.024664    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:34:34.585077    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [67bc02c3-6e5e-4179-8897-0fc0f4d94b22] Running
E1228 07:34:37.906327    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:34:39.705928    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003792691s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-772313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.53s)
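
DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8m for it to run, then executes ulimit -n inside it to check the container's open-file limit. A manual equivalent, with kubectl wait standing in for the test's polling (an assumption):

    kubectl --context old-k8s-version-772313 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-772313 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context old-k8s-version-772313 exec busybox -- /bin/sh -c "ulimit -n"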

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-907105 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:34:30.744168    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-907105 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (47.634841112s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-772313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-772313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.44248541s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-772313 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.61s)
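
The overrides point metrics-server at a placeholder: --images=MetricsServer=... swaps the component's image and --registries=MetricsServer=fake.domain swaps its registry, so the addon plumbing is exercised without a real metrics-server ever coming up. Both commands verbatim from the log:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-772313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-772313 describe deploy/metrics-server -n kube-system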

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-772313 --alsologtostderr -v=3
E1228 07:34:49.947000    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-772313 --alsologtostderr -v=3: (11.749841268s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-772313 -n old-k8s-version-772313
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-772313 -n old-k8s-version-772313: exit status 7 (177.529229ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-772313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)
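
minikube status reports through its exit code as well as stdout, which is why the harness notes "exit status 7 (may be ok)": with the profile stopped, {{.Host}} prints Stopped and the bitmask-style exit code is non-zero without indicating an error. The same Go template can read several fields at once; the field names below all appear elsewhere in this report:

    out/minikube-linux-arm64 status -p old-k8s-version-772313 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'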

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (53.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-772313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1228 07:34:57.541659    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:10.427875    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-772313 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (53.065452921s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-772313 -n old-k8s-version-772313
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-907105 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [57ddc0ec-dece-4f11-9f84-ae99bea34fea] Pending
helpers_test.go:353: "busybox" [57ddc0ec-dece-4f11-9f84-ae99bea34fea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [57ddc0ec-dece-4f11-9f84-ae99bea34fea] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003443331s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-907105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-907105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1228 07:35:27.660817    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/addons-201219/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-907105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.160016282s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-907105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-907105 --alsologtostderr -v=3
E1228 07:35:32.954567    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:32.959841    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:32.970120    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:32.990478    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:33.030692    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:33.110952    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:33.271328    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:33.591652    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:34.232571    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:35.513095    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:35.810444    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:38.073457    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-907105 --alsologtostderr -v=3: (11.492516443s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105: exit status 7 (70.919478ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-907105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-907105 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:35:43.193698    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-907105 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (51.532758891s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-xzmpm" [144e03f0-44c1-4527-8473-f3401e0c5771] Running
E1228 07:35:51.388308    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.434293    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.763153    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.768427    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.778720    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.798956    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.839217    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:53.919509    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:54.080360    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:54.401517    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007479951s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-xzmpm" [144e03f0-44c1-4527-8473-f3401e0c5771] Running
E1228 07:35:55.042581    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:56.323059    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:35:58.883567    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004134571s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-772313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-772313 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-772313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-772313 --alsologtostderr -v=1: (1.181256548s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-772313 -n old-k8s-version-772313
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-772313 -n old-k8s-version-772313: exit status 2 (379.377028ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-772313 -n old-k8s-version-772313
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-772313 -n old-k8s-version-772313: exit status 2 (369.747461ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-772313 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-772313 -n old-k8s-version-772313
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-772313 -n old-k8s-version-772313
E1228 07:36:04.004436    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.27s)
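
Note: the pause sequence above can be replayed by hand; exit status 2 from status is expected while the cluster is paused, which is why the test logs it as "may be ok". A minimal sketch, again assuming a local minikube binary:

    minikube pause -p old-k8s-version-772313                           # freeze the control plane
    minikube status --format={{.APIServer}} -p old-k8s-version-772313  # prints "Paused", exit status 2
    minikube status --format={{.Kubelet}} -p old-k8s-version-772313    # prints "Stopped", exit status 2
    minikube unpause -p old-k8s-version-772313                         # resume the cluster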

TestStartStop/group/embed-certs/serial/FirstStart (70.97s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-605602 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:36:13.914849    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:36:14.245434    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:36:19.462346    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-605602 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m10.965114481s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.97s)
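
Note: --embed-certs only changes how the kubeconfig entry is written: the client certificate and key are inlined as *-data fields instead of referenced as file paths. A hypothetical inspection step (not part of the test) to confirm this after the start above:

    # the embed-certs-605602 user entry should carry client-certificate-data and
    # client-key-data inline rather than client-certificate/client-key file paths
    kubectl config view --raw | grep -A 3 'name: embed-certs-605602'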

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4q6td" [006bf853-1da3-441d-b01e-a51b6e6bd7b5] Running
E1228 07:36:34.726363    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003359627s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
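
Note: the harness polls for the labelled pods through its own helpers (helpers_test.go:353); a roughly equivalent stand-in with stock kubectl, offered as an assumption rather than what the test actually runs, would be:

    kubectl --context default-k8s-diff-port-907105 -n kubernetes-dashboard \
      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m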

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4q6td" [006bf853-1da3-441d-b01e-a51b6e6bd7b5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003234472s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-907105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-907105 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-907105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105
E1228 07:36:45.221729    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/functional-723745/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105: exit status 2 (511.0015ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105: exit status 2 (343.111523ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-907105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-907105 -n default-k8s-diff-port-907105
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.43s)

TestStartStop/group/no-preload/serial/FirstStart (52.26s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-771052 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:36:54.063627    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:36:54.875135    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:01.948267    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:01.953911    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:01.964545    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:01.984981    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:02.025208    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:02.105472    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:02.265877    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:02.586666    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:03.227417    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:04.507909    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:07.068553    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:12.188769    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:13.308849    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:15.687192    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-771052 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (52.260915078s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.26s)
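
Note: --preload=false skips minikube's preloaded image tarball, so every v1.35.0 image is pulled individually; that is the scenario the no-preload group exercises. The invocation, reduced to its essentials for a local minikube binary:

    minikube start -p no-preload-771052 --memory=3072 --preload=false \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0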

TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-605602 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a0c6dbcc-f18d-4240-8ada-79c1e9ef9e46] Pending
helpers_test.go:353: "busybox" [a0c6dbcc-f18d-4240-8ada-79c1e9ef9e46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1228 07:37:21.747294    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/auto-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:22.429209    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a0c6dbcc-f18d-4240-8ada-79c1e9ef9e46] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004332972s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-605602 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)
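
Note: the deploy step is three commands; testdata/busybox.yaml sits next to the test sources in minikube's test/integration directory. A minimal replay, where the wait line is a stock-kubectl stand-in for the harness's own polling:

    kubectl --context embed-certs-605602 create -f testdata/busybox.yaml
    kubectl --context embed-certs-605602 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m
    # the test then checks the file-descriptor limit inside the container
    kubectl --context embed-certs-605602 exec busybox -- /bin/sh -c "ulimit -n"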

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.65s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-605602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-605602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.480273111s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-605602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.65s)
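
Note: the --images/--registries flags deliberately point the metrics-server addon at a substitute image (echoserver) and an unreachable registry (fake.domain); reading the test's intent, the describe that follows checks that the override landed in the Deployment spec, not that the pod runs. Replayed by hand:

    minikube addons enable metrics-server -p embed-certs-605602 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # inspect the Image: field to see the substituted registry and image
    kubectl --context embed-certs-605602 -n kube-system describe deploy/metrics-server | grep -i image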

TestStartStop/group/embed-certs/serial/Stop (11.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-605602 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-605602 --alsologtostderr -v=3: (11.340155093s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-605602 -n embed-certs-605602
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-605602 -n embed-certs-605602: exit status 7 (79.000404ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-605602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
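
Note: exit status 7 from status is how a stopped host reports, hence "may be ok" in the log; addons can still be toggled while the cluster is down, and the enabled dashboard is what the later UserAppExistsAfterStop/AddonExistsAfterStop checks look for after SecondStart. Replayed by hand:

    minikube stop -p embed-certs-605602
    minikube status --format={{.Host}} -p embed-certs-605602   # prints "Stopped", exit status 7
    minikube addons enable dashboard -p embed-certs-605602 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4   # takes effect on the next start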

TestStartStop/group/embed-certs/serial/SecondStart (52.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-605602 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:37:41.971722    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:41.976993    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:41.987245    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:42.007525    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:42.047807    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:42.128065    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:42.288504    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:42.609338    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-605602 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (51.845130345s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-605602 -n embed-certs-605602
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.33s)

TestStartStop/group/no-preload/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-771052 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E1228 07:37:42.909758    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [01c4f3e7-a5e6-435b-8b55-2365f13d29e7] Pending
E1228 07:37:43.249684    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [01c4f3e7-a5e6-435b-8b55-2365f13d29e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1228 07:37:44.530596    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [01c4f3e7-a5e6-435b-8b55-2365f13d29e7] Running
E1228 07:37:47.091133    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004092424s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-771052 exec busybox -- /bin/sh -c "ulimit -n"
E1228 07:37:51.968173    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:37:52.211843    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.52s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-771052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-771052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.343704203s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-771052 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/no-preload/serial/Stop (11.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-771052 --alsologtostderr -v=3
E1228 07:38:02.452425    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-771052 --alsologtostderr -v=3: (11.917904782s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-771052 -n no-preload-771052
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-771052 -n no-preload-771052: exit status 7 (105.452925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-771052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (28.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-771052 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:38:16.795733    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/false-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:19.650941    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kindnet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:22.932882    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/bridge-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:23.869979    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-771052 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (28.19169594s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-771052 -n no-preload-771052
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (28.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-twdrn" [afdc35fb-a2e0-4199-8a3a-a949d35b8257] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004104411s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jkkdj" [32fe6e06-f4b0-4e96-a3a9-d725e78a3ae8] Running
E1228 07:38:35.615560    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/calico-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:38:37.608070    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/enable-default-cni-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003127434s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-twdrn" [afdc35fb-a2e0-4199-8a3a-a949d35b8257] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004083571s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-605602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jkkdj" [32fe6e06-f4b0-4e96-a3a9-d725e78a3ae8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003201478s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-771052 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-605602 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (4.49s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-605602 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-605602 -n embed-certs-605602
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-605602 -n embed-certs-605602: exit status 2 (426.342096ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-605602 -n embed-certs-605602
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-605602 -n embed-certs-605602: exit status 2 (437.854735ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-605602 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-605602 -n embed-certs-605602
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-605602 -n embed-certs-605602
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.49s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-771052 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (4.7s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-771052 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-771052 -n no-preload-771052
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-771052 -n no-preload-771052: exit status 2 (422.547392ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-771052 -n no-preload-771052
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-771052 -n no-preload-771052: exit status 2 (525.424548ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-771052 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-771052 --alsologtostderr -v=1: (1.083458796s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-771052 -n no-preload-771052
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-771052 -n no-preload-771052
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.70s)

TestStartStop/group/newest-cni/serial/FirstStart (34.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-599074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-599074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (34.693534253s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.69s)
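
Note: this profile starts with --network-plugin=cni but does not set up a CNI for pods (per the warnings the test logs in later steps), so --wait is narrowed to apiserver,system_pods,default_sa; anything that needs schedulable pods would never become ready. The pod CIDR is passed straight through to kubeadm via --extra-config. Reduced invocation for a local minikube binary:

    minikube start -p newest-cni-599074 --memory=3072 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0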

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-599074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-599074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.089450477s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (6.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-599074 --alsologtostderr -v=3
E1228 07:39:29.464945    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/custom-flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:29.958734    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:29.964119    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:29.974519    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:29.994873    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:30.054611    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:30.135534    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:30.296284    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:30.616830    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:31.257833    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:32.538869    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-599074 --alsologtostderr -v=3: (6.109727974s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-599074 -n newest-cni-599074
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-599074 -n newest-cni-599074: exit status 7 (63.612025ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-599074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (16.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-599074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1228 07:39:35.099120    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:37.973725    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/kubenet-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:40.219580    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:45.790959    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/flannel-436830/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1228 07:39:50.460073    4202 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/old-k8s-version-772313/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-599074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (16.603851364s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-599074 -n newest-cni-599074
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.95s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-599074 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (2.96s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-599074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-599074 -n newest-cni-599074
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-599074 -n newest-cni-599074: exit status 2 (322.299584ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-599074 -n newest-cni-599074
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-599074 -n newest-cni-599074: exit status 2 (332.391667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-599074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-599074 -n newest-cni-599074
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-599074 -n newest-cni-599074
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)
Test skip (26/352)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-967998 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-967998" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-967998
--- SKIP: TestDownloadOnlyKic (0.43s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-436830 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-436830

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-436830

>>> host: /etc/nsswitch.conf:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/hosts:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/resolv.conf:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-436830

>>> host: crictl pods:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: crictl containers:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> k8s: describe netcat deployment:
error: context "cilium-436830" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-436830" does not exist

>>> k8s: netcat logs:
error: context "cilium-436830" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-436830" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-436830" does not exist

>>> k8s: coredns logs:
error: context "cilium-436830" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-436830" does not exist

>>> k8s: api server logs:
error: context "cilium-436830" does not exist

>>> host: /etc/cni:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: ip a s:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: ip r s:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: iptables-save:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: iptables table nat:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-436830

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-436830

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-436830" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-436830" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-436830

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-436830

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-436830" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-436830" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-436830" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-436830" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-436830" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: kubelet daemon config:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> k8s: kubelet logs:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22352-2382/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 07:05:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-575789
contexts:
- context:
    cluster: offline-docker-575789
    extensions:
    - extension:
        last-update: Sun, 28 Dec 2025 07:05:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-docker-575789
  name: offline-docker-575789
current-context: offline-docker-575789
kind: Config
preferences: {}
users:
- name: offline-docker-575789
  user:
    client-certificate: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/offline-docker-575789/client.crt
    client-key: /home/jenkins/minikube-integration/22352-2382/.minikube/profiles/offline-docker-575789/client.key
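
(Note on the dump above: this kubeconfig knows only the offline-docker-575789 cluster, context, and user, which is consistent with every cilium-436830 probe failing with "context was not found". A minimal sketch of inspecting a kubeconfig programmatically with client-go's clientcmd package; the file path below is an assumption for illustration, not taken from the report.)

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; point this at the kubeconfig your run actually wrote.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("known context:", name)
	}
}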

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-436830

>>> host: docker daemon status:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: docker daemon config:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: docker system info:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: cri-docker daemon status:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: cri-docker daemon config:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: cri-dockerd version:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: containerd daemon status:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: containerd daemon config:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: containerd config dump:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: crio daemon status:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: crio daemon config:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: /etc/crio:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

>>> host: crio config:
* Profile "cilium-436830" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-436830"

----------------------- debugLogs end: cilium-436830 [took: 3.843258181s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-436830" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-436830
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)
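
(The debugLogs block above is a series of probes run against a profile that never existed, since the cilium test was skipped before any cluster was created, so each probe prints its error verbatim. A minimal sketch of reproducing one such probe from Go; this is an illustration, not minikube's actual debugLogs harness, and it assumes kubectl is on PATH.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent of one ">>> k8s: ..." probe: run kubectl against the
	// profile's context and keep combined stdout/stderr verbatim.
	out, _ := exec.Command("kubectl", "--context", "cilium-436830", "get", "nodes").CombinedOutput()
	fmt.Print(string(out)) // e.g. error: context "cilium-436830" does not exist
}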

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-361207" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-361207
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)