Test Report: Docker_Linux_docker_arm64 22353

dccbb7bb926f2ef30a57d8898bfc971889daa155:2025-12-29:43039

Tests failed (2/352)

Order  Failed test            Duration (s)
52     TestForceSystemdFlag   507.17
53     TestForceSystemdEnv    508.37
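To reproduce these failures outside CI, the failing tests can be re-run individually from a minikube checkout. The sketch below assumes minikube's "make integration" target and its TEST_ARGS variable as described in the minikube contributor docs; the exact flag values are assumptions, not taken from this report:

	# Sketch only: re-run the failing tests against the docker driver.
	# "TestForceSystemd" is an unanchored regex, so -test.run matches both
	# TestForceSystemdFlag and TestForceSystemdEnv.
	env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestForceSystemd" make integration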
TestForceSystemdFlag (507.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1229 07:25:02.580056  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:26:49.122709  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.282925  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.288210  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.298645  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.318865  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.359433  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.439876  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.600372  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:50.921018  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:51.561343  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:52.841971  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:27:55.402194  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:00.522803  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:10.763346  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:31.244147  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:28:46.070623  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:29:12.205907  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:30:02.582133  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:30:34.126680  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m22.901302907s)

-- stdout --
	* [force-systemd-flag-136540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-136540" primary control-plane node in "force-systemd-flag-136540" cluster
	* Pulling base image v0.0.48-1766979815-22353 ...
	
	

-- /stdout --
** stderr ** 
	I1229 07:24:31.862836  949749 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:24:31.863055  949749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:31.863084  949749 out.go:374] Setting ErrFile to fd 2...
	I1229 07:24:31.863106  949749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:31.863378  949749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:24:31.863845  949749 out.go:368] Setting JSON to false
	I1229 07:24:31.864812  949749 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14821,"bootTime":1766978251,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 07:24:31.864951  949749 start.go:143] virtualization:  
	I1229 07:24:31.867861  949749 out.go:179] * [force-systemd-flag-136540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:24:31.869825  949749 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:24:31.869885  949749 notify.go:221] Checking for updates...
	I1229 07:24:31.875448  949749 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:24:31.878231  949749 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 07:24:31.880884  949749 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 07:24:31.883938  949749 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:24:31.887027  949749 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:24:31.890228  949749 config.go:182] Loaded profile config "force-systemd-env-262325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:31.890373  949749 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:24:31.923367  949749 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:24:31.923482  949749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:24:32.003280  949749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:31.993283051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:24:32.003399  949749 docker.go:319] overlay module found
	I1229 07:24:32.006854  949749 out.go:179] * Using the docker driver based on user configuration
	I1229 07:24:32.009686  949749 start.go:309] selected driver: docker
	I1229 07:24:32.009709  949749 start.go:928] validating driver "docker" against <nil>
	I1229 07:24:32.009723  949749 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:24:32.010422  949749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:24:32.093914  949749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:32.084018482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:24:32.094069  949749 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:24:32.094295  949749 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:24:32.097347  949749 out.go:179] * Using Docker driver with root privileges
	I1229 07:24:32.100108  949749 cni.go:84] Creating CNI manager for ""
	I1229 07:24:32.100218  949749 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:32.100231  949749 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1229 07:24:32.100307  949749 start.go:353] cluster config:
	{Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:32.103339  949749 out.go:179] * Starting "force-systemd-flag-136540" primary control-plane node in "force-systemd-flag-136540" cluster
	I1229 07:24:32.106301  949749 cache.go:134] Beginning downloading kic base image for docker with docker
	I1229 07:24:32.109381  949749 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:24:32.112189  949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:32.112257  949749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1229 07:24:32.112273  949749 cache.go:65] Caching tarball of preloaded images
	I1229 07:24:32.112370  949749 preload.go:251] Found /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:24:32.112387  949749 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:24:32.112504  949749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json ...
	I1229 07:24:32.112529  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json: {Name:mkd5ba600f81117204cfd1742166eccffeab192c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.112704  949749 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:24:32.142727  949749 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:24:32.142753  949749 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:24:32.142768  949749 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:24:32.142799  949749 start.go:360] acquireMachinesLock for force-systemd-flag-136540: {Name:mk4472157db195a18f5d219cb5373fd9e5bc1c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:24:32.142903  949749 start.go:364] duration metric: took 83.87µs to acquireMachinesLock for "force-systemd-flag-136540"
	I1229 07:24:32.142934  949749 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1229 07:24:32.143011  949749 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:24:32.146413  949749 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:24:32.146645  949749 start.go:159] libmachine.API.Create for "force-systemd-flag-136540" (driver="docker")
	I1229 07:24:32.146676  949749 client.go:173] LocalClient.Create starting
	I1229 07:24:32.146732  949749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem
	I1229 07:24:32.146774  949749 main.go:144] libmachine: Decoding PEM data...
	I1229 07:24:32.146796  949749 main.go:144] libmachine: Parsing certificate...
	I1229 07:24:32.146850  949749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem
	I1229 07:24:32.146881  949749 main.go:144] libmachine: Decoding PEM data...
	I1229 07:24:32.146896  949749 main.go:144] libmachine: Parsing certificate...
	I1229 07:24:32.147267  949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:24:32.184241  949749 cli_runner.go:211] docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:24:32.184329  949749 network_create.go:284] running [docker network inspect force-systemd-flag-136540] to gather additional debugging logs...
	I1229 07:24:32.184347  949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540
	W1229 07:24:32.202472  949749 cli_runner.go:211] docker network inspect force-systemd-flag-136540 returned with exit code 1
	I1229 07:24:32.202500  949749 network_create.go:287] error running [docker network inspect force-systemd-flag-136540]: docker network inspect force-systemd-flag-136540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-136540 not found
	I1229 07:24:32.202514  949749 network_create.go:289] output of [docker network inspect force-systemd-flag-136540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-136540 not found
	
	** /stderr **
	I1229 07:24:32.202606  949749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:24:32.225877  949749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
	I1229 07:24:32.226204  949749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e5c59511c8c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:c4:8e:57:d6:4a} reservation:<nil>}
	I1229 07:24:32.226527  949749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-857d67da440f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:bc:86:0f:2c:21} reservation:<nil>}
	I1229 07:24:32.226688  949749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-79307d27fbf3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:05:93:d6:4a:c7} reservation:<nil>}
	I1229 07:24:32.227128  949749 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58010}
	I1229 07:24:32.227147  949749 network_create.go:124] attempt to create docker network force-systemd-flag-136540 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:24:32.227210  949749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-136540 force-systemd-flag-136540
	I1229 07:24:32.293469  949749 network_create.go:108] docker network force-systemd-flag-136540 192.168.85.0/24 created
	I1229 07:24:32.293514  949749 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-136540" container
	I1229 07:24:32.293586  949749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:24:32.309969  949749 cli_runner.go:164] Run: docker volume create force-systemd-flag-136540 --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:24:32.342891  949749 oci.go:103] Successfully created a docker volume force-systemd-flag-136540
	I1229 07:24:32.343001  949749 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-136540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --entrypoint /usr/bin/test -v force-systemd-flag-136540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:24:32.956540  949749 oci.go:107] Successfully prepared a docker volume force-systemd-flag-136540
	I1229 07:24:32.956596  949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:32.956607  949749 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:24:32.956681  949749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-136540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:24:36.453768  949749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-136540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.49703133s)
	I1229 07:24:36.453806  949749 kic.go:203] duration metric: took 3.497195297s to extract preloaded images to volume ...
	W1229 07:24:36.453940  949749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:24:36.454069  949749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:24:36.553908  949749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-136540 --name force-systemd-flag-136540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-136540 --network force-systemd-flag-136540 --ip 192.168.85.2 --volume force-systemd-flag-136540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:24:36.921885  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Running}}
	I1229 07:24:36.949531  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
	I1229 07:24:36.977208  949749 cli_runner.go:164] Run: docker exec force-systemd-flag-136540 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:24:37.043401  949749 oci.go:144] the created container "force-systemd-flag-136540" has a running status.
	I1229 07:24:37.043446  949749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa...
	I1229 07:24:37.613435  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:24:37.613488  949749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:24:37.645753  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
	I1229 07:24:37.677430  949749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:24:37.677450  949749 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-136540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:24:37.757532  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
	I1229 07:24:37.783838  949749 machine.go:94] provisionDockerMachine start ...
	I1229 07:24:37.783940  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:37.816369  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:37.816708  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:37.816718  949749 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:24:37.817297  949749 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48280->127.0.0.1:33762: read: connection reset by peer
	I1229 07:24:40.967978  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-136540
	
	I1229 07:24:40.968004  949749 ubuntu.go:182] provisioning hostname "force-systemd-flag-136540"
	I1229 07:24:40.968074  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:40.986787  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:40.987162  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:40.987185  949749 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-136540 && echo "force-systemd-flag-136540" | sudo tee /etc/hostname
	I1229 07:24:41.155636  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-136540
	
	I1229 07:24:41.155724  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.177733  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:41.178031  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:41.178048  949749 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-136540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-136540/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-136540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:24:41.332316  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:24:41.332339  949749 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-723215/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-723215/.minikube}
	I1229 07:24:41.332371  949749 ubuntu.go:190] setting up certificates
	I1229 07:24:41.332381  949749 provision.go:84] configureAuth start
	I1229 07:24:41.332439  949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
	I1229 07:24:41.349068  949749 provision.go:143] copyHostCerts
	I1229 07:24:41.349109  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:24:41.349165  949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem, removing ...
	I1229 07:24:41.349180  949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:24:41.349258  949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem (1082 bytes)
	I1229 07:24:41.349344  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:24:41.349367  949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem, removing ...
	I1229 07:24:41.349374  949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:24:41.349400  949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem (1123 bytes)
	I1229 07:24:41.349453  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:24:41.349475  949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem, removing ...
	I1229 07:24:41.349480  949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:24:41.349511  949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem (1675 bytes)
	I1229 07:24:41.349577  949749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-136540 san=[127.0.0.1 192.168.85.2 force-systemd-flag-136540 localhost minikube]
	I1229 07:24:41.546735  949749 provision.go:177] copyRemoteCerts
	I1229 07:24:41.546817  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:24:41.546861  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.566148  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:41.671926  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:24:41.672027  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:24:41.689940  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:24:41.690004  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:24:41.707708  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:24:41.707770  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:24:41.725505  949749 provision.go:87] duration metric: took 393.100381ms to configureAuth
	I1229 07:24:41.725531  949749 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:24:41.725728  949749 config.go:182] Loaded profile config "force-systemd-flag-136540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:41.725782  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.743373  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:41.743703  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:41.743713  949749 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:24:41.897630  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1229 07:24:41.897708  949749 ubuntu.go:71] root file system type: overlay
	I1229 07:24:41.897848  949749 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:24:41.897935  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.921519  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:41.921836  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:41.921950  949749 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:24:42.102668  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:24:42.102864  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:42.133688  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:42.134051  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:42.134080  949749 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:24:43.163247  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-29 07:24:42.093571384 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1229 07:24:43.163281  949749 machine.go:97] duration metric: took 5.379421515s to provisionDockerMachine
	I1229 07:24:43.163293  949749 client.go:176] duration metric: took 11.016607482s to LocalClient.Create
	I1229 07:24:43.163321  949749 start.go:167] duration metric: took 11.016676896s to libmachine.API.Create "force-systemd-flag-136540"
	I1229 07:24:43.163335  949749 start.go:293] postStartSetup for "force-systemd-flag-136540" (driver="docker")
	I1229 07:24:43.163345  949749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:24:43.163421  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:24:43.163475  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.181417  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.288488  949749 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:24:43.291782  949749 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:24:43.291809  949749 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:24:43.291822  949749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/addons for local assets ...
	I1229 07:24:43.291880  949749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/files for local assets ...
	I1229 07:24:43.291954  949749 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> 7250782.pem in /etc/ssl/certs
	I1229 07:24:43.291962  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /etc/ssl/certs/7250782.pem
	I1229 07:24:43.292057  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:24:43.299384  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:43.317036  949749 start.go:296] duration metric: took 153.684905ms for postStartSetup
	I1229 07:24:43.317451  949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
	I1229 07:24:43.335322  949749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json ...
	I1229 07:24:43.335607  949749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:24:43.335663  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.354609  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.461171  949749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:24:43.466044  949749 start.go:128] duration metric: took 11.323009959s to createHost
	I1229 07:24:43.466091  949749 start.go:83] releasing machines lock for "force-systemd-flag-136540", held for 11.323168174s
	I1229 07:24:43.466184  949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
	I1229 07:24:43.483271  949749 ssh_runner.go:195] Run: cat /version.json
	I1229 07:24:43.483331  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.483583  949749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:24:43.483648  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.504986  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.516239  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.695447  949749 ssh_runner.go:195] Run: systemctl --version
	I1229 07:24:43.701895  949749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:24:43.706075  949749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:24:43.706145  949749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:24:43.733426  949749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1229 07:24:43.733449  949749 start.go:496] detecting cgroup driver to use...
	I1229 07:24:43.733462  949749 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:24:43.733554  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:24:43.747390  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:24:43.755747  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:24:43.764296  949749 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:24:43.764426  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:24:43.773285  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:24:43.782062  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:24:43.790627  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:24:43.799083  949749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:24:43.806872  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:24:43.815660  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:24:43.824501  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
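The sed edits above are how the test forces containerd onto the systemd cgroup driver: they rewrite /etc/containerd/config.toml in place rather than templating a new file. A quick hand-check of the result, assuming the stock kicbase config.toml layout:

    # confirm the setting the sed commands above are meant to produce
    sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
    # expected stanza (sketch, assuming the default CRI runc runtime section):
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true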
	I1229 07:24:43.833359  949749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:24:43.840707  949749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:24:43.847859  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:43.958912  949749 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1229 07:24:44.059070  949749 start.go:496] detecting cgroup driver to use...
	I1229 07:24:44.059146  949749 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:24:44.059226  949749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:24:44.075065  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:24:44.088639  949749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:24:44.122930  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:24:44.137375  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:24:44.155656  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:24:44.175473  949749 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:24:44.180371  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:24:44.190423  949749 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:24:44.205661  949749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:24:44.321544  949749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:24:44.440345  949749 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:24:44.440462  949749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
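The 129-byte /etc/docker/daemon.json pushed here is what actually flips Docker to the systemd cgroup driver. The log records only its size, not its contents; a plausible sketch, assuming minikube's usual exec-opts form:

    sudo cat /etc/docker/daemon.json
    # plausible contents (assumption; only the 129-byte size is logged):
    #   { "exec-opts": ["native.cgroupdriver=systemd"], ... }
    # confirm what the daemon actually reports after the restart below:
    docker info --format '{{.CgroupDriver}}'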
	I1229 07:24:44.454047  949749 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:24:44.466753  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:44.579909  949749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:24:44.997772  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:24:45.025871  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:24:45.048256  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:24:45.067946  949749 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:24:45.246433  949749 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:24:45.394951  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:45.519551  949749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:24:45.535811  949749 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:24:45.548627  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:45.673698  949749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:24:45.747485  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:24:45.762101  949749 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:24:45.762224  949749 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:24:45.765988  949749 start.go:574] Will wait 60s for crictl version
	I1229 07:24:45.766089  949749 ssh_runner.go:195] Run: which crictl
	I1229 07:24:45.769514  949749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:24:45.795220  949749 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
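crictl picks up its endpoint from the /etc/crictl.yaml written a few steps above, which is why the bare `crictl version` call resolves to cri-dockerd. The equivalent explicit invocation:

    # same check with the endpoint spelled out instead of read from /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version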
	I1229 07:24:45.795343  949749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:24:45.817012  949749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:24:45.845183  949749 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1229 07:24:45.845304  949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:24:45.862014  949749 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:24:45.865896  949749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:24:45.875964  949749 kubeadm.go:884] updating cluster {Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:24:45.876083  949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:45.876188  949749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:24:45.893986  949749 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 07:24:45.894009  949749 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:24:45.894075  949749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:24:45.911802  949749 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 07:24:45.911829  949749 cache_images.go:86] Images are preloaded, skipping loading
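The two identical image listings bracket the preload decision: the first determines whether the preloaded tarball needs extracting, the second re-reads the store afterwards. The same check can be reproduced by hand:

    # list images in the format minikube parses (docker.go:694 above)
    docker images --format '{{.Repository}}:{{.Tag}}' | sort
    # all eight expected v1.35.0 images being present is why extraction is skipped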
	I1229 07:24:45.911839  949749 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1229 07:24:45.911933  949749 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-136540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
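The unit fragment above lands as a systemd drop-in (the 10-kubeadm.conf scp just below); the empty ExecStart= line is deliberate, clearing the packaged ExecStart before the minikube-specific one takes effect. Once installed, the merged unit is visible via:

    # show kubelet.service with all drop-ins merged, including the override above
    sudo systemctl cat kubelet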
	I1229 07:24:45.912006  949749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:24:45.963834  949749 cni.go:84] Creating CNI manager for ""
	I1229 07:24:45.963864  949749 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:45.963922  949749 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:24:45.963952  949749 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-136540 NodeName:force-systemd-flag-136540 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:24:45.964163  949749 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-136540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:24:45.964261  949749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:24:45.972065  949749 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:24:45.972197  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:24:45.979844  949749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1229 07:24:45.992556  949749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:24:46.006552  949749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
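With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before `kubeadm init` consumes it; a sketch, assuming the `kubeadm config validate` subcommand available in recent kubeadm releases:

    # offline validation of the generated config (assumed subcommand; not run by the test)
    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new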
	I1229 07:24:46.020398  949749 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:24:46.024230  949749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:24:46.035368  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:46.163494  949749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:24:46.184599  949749 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540 for IP: 192.168.85.2
	I1229 07:24:46.184618  949749 certs.go:195] generating shared ca certs ...
	I1229 07:24:46.184635  949749 certs.go:227] acquiring lock for ca certs: {Name:mk9c2ed6b225eba3a3b373f488351467f747c9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.184776  949749 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key
	I1229 07:24:46.184825  949749 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key
	I1229 07:24:46.184837  949749 certs.go:257] generating profile certs ...
	I1229 07:24:46.184891  949749 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key
	I1229 07:24:46.184906  949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt with IP's: []
	I1229 07:24:46.406421  949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt ...
	I1229 07:24:46.406498  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt: {Name:mkeabcc81e93cc9bab177300f214aee09ffb34da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.406748  949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key ...
	I1229 07:24:46.406796  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key: {Name:mk1d3be86290b8aa5c0871eada27f23610866e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.406948  949749 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c
	I1229 07:24:46.407005  949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:24:46.644365  949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c ...
	I1229 07:24:46.644395  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c: {Name:mk20477dd3211295249f0fd8db3287c9ced07fcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.644644  949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c ...
	I1229 07:24:46.644661  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c: {Name:mk90a993a5735e7ecab2e7be38b0b8ea44299fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.644750  949749 certs.go:382] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt
	I1229 07:24:46.644835  949749 certs.go:386] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key
	I1229 07:24:46.644897  949749 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key
	I1229 07:24:46.644913  949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt with IP's: []
	I1229 07:24:47.026929  949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt ...
	I1229 07:24:47.026978  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt: {Name:mk152d5d3beadbce81174a15f580235a4bfefeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:47.027179  949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key ...
	I1229 07:24:47.027195  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key: {Name:mkd3178fa5a3e305677094e64826570746f84993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
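Each generated pair can be cross-checked with openssl; the apiserver cert written above, for instance, should carry exactly the four IP SANs requested at generation time (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2):

    PROFILE=/home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540
    # subject and validity window of the freshly minted apiserver cert
    openssl x509 -noout -subject -dates -in "$PROFILE/apiserver.crt"
    # IP SANs baked in at generation time (needs OpenSSL 1.1.1+ for -ext)
    openssl x509 -noout -ext subjectAltName -in "$PROFILE/apiserver.crt"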
	I1229 07:24:47.027366  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:24:47.027396  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:24:47.027413  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:24:47.027428  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:24:47.027440  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:24:47.027462  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:24:47.027478  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:24:47.027488  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:24:47.027539  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem (1338 bytes)
	W1229 07:24:47.027580  949749 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078_empty.pem, impossibly tiny 0 bytes
	I1229 07:24:47.027593  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:24:47.027622  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:24:47.027655  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:24:47.027688  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem (1675 bytes)
	I1229 07:24:47.027736  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:47.027771  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.027789  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.027800  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem -> /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.028420  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:24:47.047819  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:24:47.066416  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:24:47.083760  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:24:47.100871  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:24:47.118300  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:24:47.135827  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:24:47.154223  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:24:47.171152  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /usr/share/ca-certificates/7250782.pem (1708 bytes)
	I1229 07:24:47.188424  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:24:47.204881  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem --> /usr/share/ca-certificates/725078.pem (1338 bytes)
	I1229 07:24:47.222920  949749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:24:47.236010  949749 ssh_runner.go:195] Run: openssl version
	I1229 07:24:47.242847  949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.250549  949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7250782.pem /etc/ssl/certs/7250782.pem
	I1229 07:24:47.257970  949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.261605  949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.261667  949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.303672  949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:47.311437  949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7250782.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:47.319608  949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.327019  949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:24:47.334490  949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.338076  949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.338184  949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.381190  949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:24:47.388743  949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:24:47.395955  949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.403397  949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/725078.pem /etc/ssl/certs/725078.pem
	I1229 07:24:47.410817  949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.414638  949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.414707  949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.458494  949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:24:47.465936  949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/725078.pem /etc/ssl/certs/51391683.0
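The test/ln pairs above implement OpenSSL's hashed-directory lookup: each CA under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink, and the hashes (3ec20f2e, b5213941, 51391683) come from the `openssl x509 -hash` calls in the log. Reproducing one link by hand:

    # the link name is the cert's subject hash plus a ".0" suffix
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem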
	I1229 07:24:47.473134  949749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:24:47.476718  949749 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:24:47.476770  949749 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:47.476884  949749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:24:47.493620  949749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:24:47.502107  949749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:24:47.509981  949749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:24:47.510046  949749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:24:47.517804  949749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:24:47.517825  949749 kubeadm.go:158] found existing configuration files:
	
	I1229 07:24:47.517877  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:24:47.525590  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:24:47.525674  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:24:47.532930  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:24:47.540396  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:24:47.540486  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:24:47.547676  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:24:47.555165  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:24:47.555256  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:24:47.562475  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:24:47.570046  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:24:47.570109  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:24:47.577347  949749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:24:47.617344  949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:24:47.617407  949749 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:24:47.711675  949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:24:47.711830  949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:24:47.711890  949749 kubeadm.go:319] OS: Linux
	I1229 07:24:47.711974  949749 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:24:47.712056  949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:24:47.712162  949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:24:47.712241  949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:24:47.712321  949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:24:47.712401  949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:24:47.712480  949749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:24:47.712559  949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:24:47.712639  949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:24:47.783238  949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:24:47.783386  949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:24:47.783503  949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:24:47.800559  949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:24:47.807021  949749 out.go:252]   - Generating certificates and keys ...
	I1229 07:24:47.807150  949749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:24:47.807244  949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:24:48.391180  949749 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:24:48.594026  949749 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:24:48.825994  949749 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:24:49.323806  949749 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:24:49.506950  949749 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:24:49.507188  949749 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:24:49.719847  949749 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:24:49.720093  949749 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:24:50.129385  949749 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:24:50.272350  949749 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:24:50.704674  949749 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:24:50.705019  949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:24:51.089352  949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:24:51.167795  949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:24:51.380140  949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:24:51.696561  949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:24:51.802016  949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:24:51.802726  949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:24:51.805447  949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:24:51.809325  949749 out.go:252]   - Booting up control plane ...
	I1229 07:24:51.809441  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:24:51.809530  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:24:51.809609  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:24:51.825390  949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:24:51.825876  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:24:51.840218  949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:24:51.840883  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:24:51.841100  949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:24:51.986925  949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:24:51.987097  949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:28:51.986578  949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00003953s
	I1229 07:28:51.986615  949749 kubeadm.go:319] 
	I1229 07:28:51.986711  949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:28:51.986761  949749 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:28:51.986866  949749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:28:51.986874  949749 kubeadm.go:319] 
	I1229 07:28:51.986980  949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:28:51.987012  949749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:28:51.987044  949749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:28:51.987048  949749 kubeadm.go:319] 
	I1229 07:28:51.991310  949749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:28:51.991737  949749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:28:51.991851  949749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:28:51.992128  949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:28:51.992138  949749 kubeadm.go:319] 
	I1229 07:28:51.992206  949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
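The root failure is that the kubelet never answered its local healthz probe. Beyond the two commands kubeadm suggests, the probe itself can be replayed directly, which separates 'not running' from 'running but unhealthy':

    # replay the exact probe kubeadm's kubelet-check performs
    curl -sS http://127.0.0.1:10248/healthz; echo
    # "connection refused", as seen here, means nothing is listening on 10248 at all;
    # the usual next step on a systemd node:
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 50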
	W1229 07:28:51.992360  949749 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00003953s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:28:51.992440  949749 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:28:52.418971  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:28:52.431883  949749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:28:52.431947  949749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:28:52.439564  949749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:28:52.439582  949749 kubeadm.go:158] found existing configuration files:
	
	I1229 07:28:52.439631  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:28:52.447231  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:28:52.447294  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:28:52.454516  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:28:52.462044  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:28:52.462110  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:28:52.469355  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:28:52.476888  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:28:52.476953  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:28:52.484710  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:28:52.492047  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:28:52.492108  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:28:52.499152  949749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:28:52.615409  949749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:28:52.615841  949749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:28:52.688523  949749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:32:54.115439  949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:32:54.115477  949749 kubeadm.go:319] 
	I1229 07:32:54.115596  949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:32:54.120837  949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:32:54.120898  949749 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:32:54.120992  949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:32:54.121051  949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:32:54.121090  949749 kubeadm.go:319] OS: Linux
	I1229 07:32:54.121140  949749 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:32:54.121192  949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:32:54.121243  949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:32:54.121296  949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:32:54.121348  949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:32:54.121401  949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:32:54.121451  949749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:32:54.121504  949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:32:54.121554  949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:32:54.121630  949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:32:54.121728  949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:32:54.121822  949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:32:54.121888  949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:32:54.125605  949749 out.go:252]   - Generating certificates and keys ...
	I1229 07:32:54.125713  949749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:32:54.125788  949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:32:54.125933  949749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:32:54.126020  949749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:32:54.126096  949749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:32:54.126205  949749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:32:54.126299  949749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:32:54.126381  949749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:32:54.126493  949749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:32:54.126610  949749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:32:54.126674  949749 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:32:54.126770  949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:32:54.126842  949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:32:54.126914  949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:32:54.126977  949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:32:54.127061  949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:32:54.127149  949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:32:54.127304  949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:32:54.127382  949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:32:54.130556  949749 out.go:252]   - Booting up control plane ...
	I1229 07:32:54.130667  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:32:54.130752  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:32:54.130843  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:32:54.131025  949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:32:54.131132  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:32:54.131248  949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:32:54.131361  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:32:54.131431  949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:32:54.131608  949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:32:54.131770  949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:32:54.131847  949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000601427s
	I1229 07:32:54.131856  949749 kubeadm.go:319] 
	I1229 07:32:54.131930  949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:32:54.131984  949749 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:32:54.132174  949749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:32:54.132203  949749 kubeadm.go:319] 
	I1229 07:32:54.132356  949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:32:54.132424  949749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:32:54.132476  949749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:32:54.132539  949749 kubeadm.go:319] 
	I1229 07:32:54.132577  949749 kubeadm.go:403] duration metric: took 8m6.655800799s to StartCluster
	I1229 07:32:54.132629  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:32:54.132713  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:32:54.187548  949749 cri.go:96] found id: ""
	I1229 07:32:54.187637  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.187660  949749 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:32:54.187700  949749 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:32:54.187803  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:32:54.217390  949749 cri.go:96] found id: ""
	I1229 07:32:54.217455  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.217487  949749 logs.go:284] No container was found matching "etcd"
	I1229 07:32:54.217507  949749 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:32:54.217596  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:32:54.246434  949749 cri.go:96] found id: ""
	I1229 07:32:54.246518  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.246541  949749 logs.go:284] No container was found matching "coredns"
	I1229 07:32:54.246561  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:32:54.246672  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:32:54.279822  949749 cri.go:96] found id: ""
	I1229 07:32:54.279884  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.279914  949749 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:32:54.279933  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:32:54.280019  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:32:54.308671  949749 cri.go:96] found id: ""
	I1229 07:32:54.308750  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.308773  949749 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:32:54.308795  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:32:54.308901  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:32:54.350922  949749 cri.go:96] found id: ""
	I1229 07:32:54.350991  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.351031  949749 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:32:54.351058  949749 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:32:54.351143  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:32:54.397671  949749 cri.go:96] found id: ""
	I1229 07:32:54.397733  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.397771  949749 logs.go:284] No container was found matching "kindnet"
	I1229 07:32:54.397801  949749 logs.go:123] Gathering logs for container status ...
	I1229 07:32:54.397849  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:32:54.472421  949749 logs.go:123] Gathering logs for kubelet ...
	I1229 07:32:54.472498  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:32:54.546509  949749 logs.go:123] Gathering logs for dmesg ...
	I1229 07:32:54.546588  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:32:54.562327  949749 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:32:54.562351  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:32:54.652514  949749 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:32:54.644694    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.645425    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.646973    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.647305    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.648732    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:32:54.644694    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.645425    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.646973    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.647305    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.648732    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:32:54.652533  949749 logs.go:123] Gathering logs for Docker ...
	I1229 07:32:54.652544  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1229 07:32:54.684054  949749 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000601427s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:32:54.684224  949749 out.go:285] * 
	W1229 07:32:54.684342  949749 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000601427s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:54.684422  949749 out.go:285] * 
	W1229 07:32:54.684712  949749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:32:54.690679  949749 out.go:203] 
	W1229 07:32:54.697342  949749 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000601427s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:54.697393  949749 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:32:54.697418  949749 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:32:54.700360  949749 out.go:203] 

                                                
                                                
** /stderr **
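
The repeated [WARNING SystemVerification] message about cgroups v1 points at a single kubelet configuration field. A minimal sketch of that setting, assuming it is merged into the file kubeadm writes at /var/lib/kubelet/config.yaml; the failCgroupV1 field name follows the KEP linked in the warning and should be verified against the kubelet version in use:

  # excerpt of /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Keep cgroup v1 support explicitly, as the warning requires for
  # kubelet v1.35+; migrating the host to cgroup v2 remains the
  # preferred fix.
  failCgroupV1: false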
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
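
The suggestion printed near the end of the run ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") maps onto the failing invocation as follows; every argument except --extra-config is copied verbatim from the command above:

  out/minikube-linux-arm64 start -p force-systemd-flag-136540 --memory=3072 --force-systemd \
    --extra-config=kubelet.cgroup-driver=systemd \
    --alsologtostderr -v=5 --driver=docker --container-runtime=docker
  # The same cgroup-driver check the test performs afterwards:
  out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "docker info --format {{.CgroupDriver}}"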
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-29 07:32:55.288178505 +0000 UTC m=+2799.950051689
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-136540
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-136540:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf",
	        "Created": "2025-12-29T07:24:36.568532723Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 950337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:24:36.646493074Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/hosts",
	        "LogPath": "/var/lib/docker/containers/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf/a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf-json.log",
	        "Name": "/force-systemd-flag-136540",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-flag-136540:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-136540",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a72da115069c25210cbf5b2e47007d177de1a563ac39ebc21da0365615ad19bf",
	                "LowerDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2-init/diff:/var/lib/docker/overlay2/3788d7c7c8e91fd886b287c15675406ce26d741d5d808d18bcc9c345d38db92c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2102d3b457127c21dd80dafc7eb68e7f83bd4c0de295f9325829fe130feb96f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-136540",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-136540/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-136540",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-136540",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-136540",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "19dfc4568a1bf4473bce23b00c3cd841299796210ad317404f50560cf0e8d9f5",
	            "SandboxKey": "/var/run/docker/netns/19dfc4568a1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33762"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33763"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33766"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33764"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33765"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-136540": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:bc:b0:83:f6:6d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4fb1612089e1045dee558dd90cf8f83fb667f1cf48f8746bd58486f63fc27afa",
	                    "EndpointID": "b10a779d4a752949bc78560da1067368878d70971db45727b187910848bc4948",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-136540",
	                        "a72da115069c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-136540 -n force-systemd-flag-136540
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-136540 -n force-systemd-flag-136540: exit status 6 (391.02333ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:32:55.687078  962194 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-136540" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig

** /stderr **
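The exit status 6 above is a kubeconfig problem rather than a host problem: the host reports Running, but the profile's endpoint is missing from the kubeconfig file named in the stderr. Interactively, the fix the warning suggests would be roughly:

    out/minikube-linux-arm64 update-context -p force-systemd-flag-136540
    kubectl config get-contexts    # confirm the force-systemd-flag-136540 context now appears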
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-136540 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-728759 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo containerd config dump                                                                                                                                                                                                        │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo crio config                                                                                                                                                                                                                   │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ delete  │ -p cilium-728759                                                                                                                                                                                                                                    │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ start   │ -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                                        │ force-systemd-env-262325  │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p NoKubernetes-198702 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ stop    │ -p NoKubernetes-198702                                                                                                                                                                                                                              │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ start   │ -p NoKubernetes-198702 --driver=docker  --container-runtime=docker                                                                                                                                                                                  │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ ssh     │ -p NoKubernetes-198702 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ delete  │ -p NoKubernetes-198702                                                                                                                                                                                                                              │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ start   │ -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                                                                                                                       │ force-systemd-flag-136540 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ force-systemd-env-262325 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                 │ force-systemd-env-262325  │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
	│ delete  │ -p force-systemd-env-262325                                                                                                                                                                                                                         │ force-systemd-env-262325  │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
	│ start   │ -p docker-flags-139514 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ docker-flags-139514       │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │                     │
	│ ssh     │ force-systemd-flag-136540 ssh docker info --format {{.CgroupDriver}}                                                                                                                                                                                │ force-systemd-flag-136540 │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
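Per the last audit rows, the force-systemd tests gate on the cgroup driver reported by the Docker daemon inside the node. That check can be reproduced against this profile with:

    out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh -- docker info --format '{{.CgroupDriver}}'
    # with --force-systemd the expected value is systemd; cgroupfs would mean the flag did not take effect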
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:32:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:32:44.762208  960427 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:32:44.762395  960427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:32:44.762429  960427 out.go:374] Setting ErrFile to fd 2...
	I1229 07:32:44.762449  960427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:32:44.762840  960427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:32:44.763428  960427 out.go:368] Setting JSON to false
	I1229 07:32:44.764429  960427 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15314,"bootTime":1766978251,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 07:32:44.764547  960427 start.go:143] virtualization:  
	I1229 07:32:44.768188  960427 out.go:179] * [docker-flags-139514] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:32:44.772528  960427 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:32:44.772620  960427 notify.go:221] Checking for updates...
	I1229 07:32:44.778947  960427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:32:44.782241  960427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 07:32:44.785320  960427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 07:32:44.788315  960427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:32:44.791329  960427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:32:44.794843  960427 config.go:182] Loaded profile config "force-systemd-flag-136540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:32:44.794959  960427 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:32:44.821843  960427 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:32:44.821993  960427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:32:44.881006  960427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:32:44.872206051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:32:44.881115  960427 docker.go:319] overlay module found
	I1229 07:32:44.884440  960427 out.go:179] * Using the docker driver based on user configuration
	I1229 07:32:44.887362  960427 start.go:309] selected driver: docker
	I1229 07:32:44.887379  960427 start.go:928] validating driver "docker" against <nil>
	I1229 07:32:44.887393  960427 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:32:44.888202  960427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:32:44.936485  960427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-29 07:32:44.927869991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:32:44.936638  960427 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:32:44.936864  960427 start_flags.go:1014] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1229 07:32:44.939829  960427 out.go:179] * Using Docker driver with root privileges
	I1229 07:32:44.942698  960427 cni.go:84] Creating CNI manager for ""
	I1229 07:32:44.942772  960427 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:32:44.942786  960427 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1229 07:32:44.942862  960427 start.go:353] cluster config:
	{Name:docker-flags-139514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-139514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:32:44.946038  960427 out.go:179] * Starting "docker-flags-139514" primary control-plane node in "docker-flags-139514" cluster
	I1229 07:32:44.948914  960427 cache.go:134] Beginning downloading kic base image for docker with docker
	I1229 07:32:44.951839  960427 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:32:44.954666  960427 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:32:44.954725  960427 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1229 07:32:44.954738  960427 cache.go:65] Caching tarball of preloaded images
	I1229 07:32:44.954763  960427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:32:44.954823  960427 preload.go:251] Found /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:32:44.954834  960427 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:32:44.954947  960427 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/docker-flags-139514/config.json ...
	I1229 07:32:44.954964  960427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/docker-flags-139514/config.json: {Name:mk7699ffe52c13d2bb58206a9cb556baefbeb6ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:32:44.974861  960427 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:32:44.974883  960427 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:32:44.974898  960427 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:32:44.974982  960427 start.go:360] acquireMachinesLock for docker-flags-139514: {Name:mk2ea4414d7cf67a9e64fe0d2913f314c869f3a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:32:44.975169  960427 start.go:364] duration metric: took 126.643µs to acquireMachinesLock for "docker-flags-139514"
	I1229 07:32:44.975211  960427 start.go:93] Provisioning new machine with config: &{Name:docker-flags-139514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:docker-flags-139514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1229 07:32:44.975341  960427 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:32:44.979442  960427 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:32:44.979683  960427 start.go:159] libmachine.API.Create for "docker-flags-139514" (driver="docker")
	I1229 07:32:44.979720  960427 client.go:173] LocalClient.Create starting
	I1229 07:32:44.979810  960427 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem
	I1229 07:32:44.979856  960427 main.go:144] libmachine: Decoding PEM data...
	I1229 07:32:44.979874  960427 main.go:144] libmachine: Parsing certificate...
	I1229 07:32:44.979928  960427 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem
	I1229 07:32:44.979950  960427 main.go:144] libmachine: Decoding PEM data...
	I1229 07:32:44.979962  960427 main.go:144] libmachine: Parsing certificate...
	I1229 07:32:44.980355  960427 cli_runner.go:164] Run: docker network inspect docker-flags-139514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:32:44.996046  960427 cli_runner.go:211] docker network inspect docker-flags-139514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:32:44.996154  960427 network_create.go:284] running [docker network inspect docker-flags-139514] to gather additional debugging logs...
	I1229 07:32:44.996176  960427 cli_runner.go:164] Run: docker network inspect docker-flags-139514
	W1229 07:32:45.041911  960427 cli_runner.go:211] docker network inspect docker-flags-139514 returned with exit code 1
	I1229 07:32:45.041946  960427 network_create.go:287] error running [docker network inspect docker-flags-139514]: docker network inspect docker-flags-139514: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network docker-flags-139514 not found
	I1229 07:32:45.041960  960427 network_create.go:289] output of [docker network inspect docker-flags-139514]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network docker-flags-139514 not found
	
	** /stderr **
	I1229 07:32:45.042085  960427 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:32:45.066171  960427 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
	I1229 07:32:45.066628  960427 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e5c59511c8c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:c4:8e:57:d6:4a} reservation:<nil>}
	I1229 07:32:45.067089  960427 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-857d67da440f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:bc:86:0f:2c:21} reservation:<nil>}
	I1229 07:32:45.067744  960427 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3f050}
	I1229 07:32:45.067849  960427 network_create.go:124] attempt to create docker network docker-flags-139514 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:32:45.067931  960427 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=docker-flags-139514 docker-flags-139514
	I1229 07:32:45.171086  960427 network_create.go:108] docker network docker-flags-139514 192.168.76.0/24 created
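The freshly created network can be sanity-checked with a shorter inspect template than the one that (expectedly) failed before creation:

    docker network inspect docker-flags-139514 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # per the log above: 192.168.76.0/24 192.168.76.1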
	I1229 07:32:45.171205  960427 kic.go:121] calculated static IP "192.168.76.2" for the "docker-flags-139514" container
	I1229 07:32:45.171366  960427 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:32:45.195384  960427 cli_runner.go:164] Run: docker volume create docker-flags-139514 --label name.minikube.sigs.k8s.io=docker-flags-139514 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:32:45.246652  960427 oci.go:103] Successfully created a docker volume docker-flags-139514
	I1229 07:32:45.246758  960427 cli_runner.go:164] Run: docker run --rm --name docker-flags-139514-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-139514 --entrypoint /usr/bin/test -v docker-flags-139514:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:32:45.758171  960427 oci.go:107] Successfully prepared a docker volume docker-flags-139514
	I1229 07:32:45.758248  960427 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:32:45.758263  960427 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:32:45.758336  960427 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-139514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:32:49.061698  960427 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v docker-flags-139514:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.303321006s)
	I1229 07:32:49.061731  960427 kic.go:203] duration metric: took 3.303465593s to extract preloaded images to volume ...
	W1229 07:32:49.061878  960427 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
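The swap-limit warning comes from the host kernel, not from minikube; whether the daemon supports swap limits can be checked with:

    docker info --format '{{.SwapLimit}}'
    # false typically means cgroup swap accounting is disabled (e.g. the kernel was booted without swapaccount=1)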
	I1229 07:32:49.061996  960427 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:32:49.123610  960427 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname docker-flags-139514 --name docker-flags-139514 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=docker-flags-139514 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=docker-flags-139514 --network docker-flags-139514 --ip 192.168.76.2 --volume docker-flags-139514:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:32:49.433568  960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Running}}
	I1229 07:32:49.454050  960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Status}}
	I1229 07:32:49.473339  960427 cli_runner.go:164] Run: docker exec docker-flags-139514 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:32:49.524175  960427 oci.go:144] the created container "docker-flags-139514" has a running status.
	I1229 07:32:49.524202  960427 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa...
	I1229 07:32:49.618990  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:32:49.619078  960427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
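With the public key installed as /home/docker/.ssh/authorized_keys, the node is reachable through the published 22/tcp mapping; a manual session would look roughly like this, with <host-port> taken from docker port docker-flags-139514 22/tcp:

    ssh -i /home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa -p <host-port> docker@127.0.0.1

In practice, out/minikube-linux-arm64 ssh -p docker-flags-139514 performs the same key and port lookup automatically.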
	I1229 07:32:49.642637  960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Status}}
	I1229 07:32:49.662599  960427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:32:49.662623  960427 kic_runner.go:114] Args: [docker exec --privileged docker-flags-139514 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:32:49.718041  960427 cli_runner.go:164] Run: docker container inspect docker-flags-139514 --format={{.State.Status}}
	I1229 07:32:49.750101  960427 machine.go:94] provisionDockerMachine start ...
	I1229 07:32:49.750208  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:54.115439  949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:32:54.115477  949749 kubeadm.go:319] 
	I1229 07:32:54.115596  949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:32:54.120837  949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:32:54.120898  949749 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:32:54.120992  949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:32:54.121051  949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:32:54.121090  949749 kubeadm.go:319] OS: Linux
	I1229 07:32:54.121140  949749 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:32:54.121192  949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:32:54.121243  949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:32:54.121296  949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:32:54.121348  949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:32:54.121401  949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:32:54.121451  949749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:32:54.121504  949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:32:54.121554  949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:32:54.121630  949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:32:54.121728  949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:32:54.121822  949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:32:54.121888  949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:32:54.125605  949749 out.go:252]   - Generating certificates and keys ...
	I1229 07:32:54.125713  949749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:32:54.125788  949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:32:54.125933  949749 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:32:54.126020  949749 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:32:54.126096  949749 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:32:54.126205  949749 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:32:54.126299  949749 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:32:54.126381  949749 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:32:54.126493  949749 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:32:54.126610  949749 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:32:54.126674  949749 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:32:54.126770  949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:32:54.126842  949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:32:54.126914  949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:32:54.126977  949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:32:54.127061  949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:32:54.127149  949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:32:54.127304  949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:32:54.127382  949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:32:54.130556  949749 out.go:252]   - Booting up control plane ...
	I1229 07:32:54.130667  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:32:54.130752  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:32:54.130843  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:32:54.131025  949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:32:54.131132  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:32:54.131248  949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:32:54.131361  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:32:54.131431  949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:32:54.131608  949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:32:54.131770  949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:32:54.131847  949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000601427s
	I1229 07:32:54.131856  949749 kubeadm.go:319] 
	I1229 07:32:54.131930  949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:32:54.131984  949749 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:32:54.132174  949749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:32:54.132203  949749 kubeadm.go:319] 
	I1229 07:32:54.132356  949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:32:54.132424  949749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:32:54.132476  949749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:32:54.132539  949749 kubeadm.go:319] 
	I1229 07:32:54.132577  949749 kubeadm.go:403] duration metric: took 8m6.655800799s to StartCluster
	I1229 07:32:54.132629  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:32:54.132713  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:32:54.187548  949749 cri.go:96] found id: ""
	I1229 07:32:54.187637  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.187660  949749 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:32:54.187700  949749 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:32:54.187803  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:32:54.217390  949749 cri.go:96] found id: ""
	I1229 07:32:54.217455  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.217487  949749 logs.go:284] No container was found matching "etcd"
	I1229 07:32:54.217507  949749 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:32:54.217596  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:32:54.246434  949749 cri.go:96] found id: ""
	I1229 07:32:54.246518  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.246541  949749 logs.go:284] No container was found matching "coredns"
	I1229 07:32:54.246561  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:32:54.246672  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:32:54.279822  949749 cri.go:96] found id: ""
	I1229 07:32:54.279884  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.279914  949749 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:32:54.279933  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:32:54.280019  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:32:54.308671  949749 cri.go:96] found id: ""
	I1229 07:32:54.308750  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.308773  949749 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:32:54.308795  949749 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:32:54.308901  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:32:54.350922  949749 cri.go:96] found id: ""
	I1229 07:32:54.350991  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.351031  949749 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:32:54.351058  949749 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:32:54.351143  949749 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:32:54.397671  949749 cri.go:96] found id: ""
	I1229 07:32:54.397733  949749 logs.go:282] 0 containers: []
	W1229 07:32:54.397771  949749 logs.go:284] No container was found matching "kindnet"
	I1229 07:32:54.397801  949749 logs.go:123] Gathering logs for container status ...
	I1229 07:32:54.397849  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:32:54.472421  949749 logs.go:123] Gathering logs for kubelet ...
	I1229 07:32:54.472498  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:32:54.546509  949749 logs.go:123] Gathering logs for dmesg ...
	I1229 07:32:54.546588  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1229 07:32:54.562327  949749 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:32:54.562351  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:32:54.652514  949749 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:32:54.644694    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.645425    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.646973    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.647305    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.648732    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:32:54.644694    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.645425    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.646973    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.647305    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:54.648732    5528 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:32:54.652533  949749 logs.go:123] Gathering logs for Docker ...
	I1229 07:32:54.652544  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1229 07:32:54.684054  949749 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000601427s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
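This dump (repeated below as the X Error line) points at a single proximate cause: the kubelet never answered its local health endpoint, so wait-control-plane timed out after 4m0s. Inside the node, the triage kubeadm itself suggests would be:

    out/minikube-linux-arm64 -p force-systemd-flag-136540 ssh
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet | tail -n 100
    curl -sSL http://127.0.0.1:10248/healthz    # the exact probe kubeadm's kubelet-check uses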
	W1229 07:32:54.684224  949749 out.go:285] * 
	W1229 07:32:54.684342  949749 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000601427s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:54.684422  949749 out.go:285] * 
	W1229 07:32:54.684712  949749 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:32:54.690679  949749 out.go:203] 
	W1229 07:32:54.697342  949749 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000601427s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:54.697393  949749 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:32:54.697418  949749 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:32:54.700360  949749 out.go:203] 
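
The SystemVerification warning repeated above names the exact knob involved: kubelet v1.35 refuses to start on a cgroup v1 host unless that is explicitly allowed. A minimal sketch of the override the warning describes, assuming the KubeletConfiguration YAML field is spelled failCgroupV1 and that /var/lib/kubelet/config.yaml is the file kubeadm writes (option name and path taken from the log text above; untested here):

	# hedged sketch: allow cgroup v1 per the warning's own instructions
	sudo tee -a /var/lib/kubelet/config.yaml <<'EOF'
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet
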
	I1229 07:32:49.777926  960427 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:49.778263  960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1229 07:32:49.778284  960427 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:32:49.778947  960427 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57796->127.0.0.1:33767: read: connection reset by peer
	I1229 07:32:52.935920  960427 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-139514
	
	I1229 07:32:52.936023  960427 ubuntu.go:182] provisioning hostname "docker-flags-139514"
	I1229 07:32:52.936149  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:52.956245  960427 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:52.956561  960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1229 07:32:52.956573  960427 main.go:144] libmachine: About to run SSH command:
	sudo hostname docker-flags-139514 && echo "docker-flags-139514" | sudo tee /etc/hostname
	I1229 07:32:53.121598  960427 main.go:144] libmachine: SSH cmd err, output: <nil>: docker-flags-139514
	
	I1229 07:32:53.121714  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:53.139000  960427 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:53.139310  960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1229 07:32:53.139332  960427 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-139514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-139514/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-139514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:32:53.288265  960427 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:32:53.288294  960427 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-723215/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-723215/.minikube}
	I1229 07:32:53.288324  960427 ubuntu.go:190] setting up certificates
	I1229 07:32:53.288332  960427 provision.go:84] configureAuth start
	I1229 07:32:53.288391  960427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" docker-flags-139514
	I1229 07:32:53.306771  960427 provision.go:143] copyHostCerts
	I1229 07:32:53.306813  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:32:53.306846  960427 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem, removing ...
	I1229 07:32:53.306852  960427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:32:53.306927  960427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem (1123 bytes)
	I1229 07:32:53.307010  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:32:53.307027  960427 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem, removing ...
	I1229 07:32:53.307031  960427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:32:53.307056  960427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem (1675 bytes)
	I1229 07:32:53.307109  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:32:53.307124  960427 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem, removing ...
	I1229 07:32:53.307128  960427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:32:53.307151  960427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem (1082 bytes)
	I1229 07:32:53.307247  960427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem org=jenkins.docker-flags-139514 san=[127.0.0.1 192.168.76.2 docker-flags-139514 localhost minikube]
	I1229 07:32:53.539981  960427 provision.go:177] copyRemoteCerts
	I1229 07:32:53.540058  960427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:32:53.540100  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:53.559039  960427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33767 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/docker-flags-139514/id_rsa Username:docker}
	I1229 07:32:53.664881  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:32:53.664935  960427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1229 07:32:53.684088  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:32:53.684159  960427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1229 07:32:53.703237  960427 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:32:53.703311  960427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:32:53.720953  960427 provision.go:87] duration metric: took 432.606921ms to configureAuth
	I1229 07:32:53.720982  960427 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:32:53.721179  960427 config.go:182] Loaded profile config "docker-flags-139514": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:32:53.721236  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:53.738495  960427 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:53.738807  960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1229 07:32:53.738823  960427 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:32:53.892861  960427 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1229 07:32:53.892884  960427 ubuntu.go:71] root file system type: overlay
	I1229 07:32:53.893003  960427 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:32:53.893076  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:53.911085  960427 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:53.911405  960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1229 07:32:53.911494  960427 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:32:54.074542  960427 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:32:54.074627  960427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" docker-flags-139514
	I1229 07:32:54.092272  960427 main.go:144] libmachine: Using SSH client type: native
	I1229 07:32:54.092591  960427 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33767 <nil> <nil>}
	I1229 07:32:54.092617  960427 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
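
The command above only swaps in docker.service.new when it differs from the live unit, then reloads and restarts docker. To spot-check the result by hand, generic systemctl introspection (not part of the minikube output) is enough:

	# show the unit file systemd actually loaded
	systemctl cat docker
	# confirm the empty ExecStart= cleared the inherited command,
	# leaving a single effective ExecStart value
	systemctl show docker -p ExecStart
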
	
	
	==> Docker <==
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.740692368Z" level=info msg="Restoring containers: start."
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.760577327Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.780620402Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.958396126Z" level=info msg="Loading containers: done."
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.969350611Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.969416940Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.969457808Z" level=info msg="Initializing buildkit"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.989268915Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994540173Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994639945Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994713280Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:24:44 force-systemd-flag-136540 dockerd[1141]: time="2025-12-29T07:24:44.994836535Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:24:44 force-systemd-flag-136540 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 29 07:24:45 force-systemd-flag-136540 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:24:45 force-systemd-flag-136540 cri-dockerd[1423]: time="2025-12-29T07:24:45Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:24:45 force-systemd-flag-136540 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:32:56.418933    5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:56.419911    5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:56.421860    5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:56.422503    5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:56.424768    5658 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec29 06:14] hrtimer: interrupt took 41514710 ns
	[Dec29 06:33] kauditd_printk_skb: 8 callbacks suppressed
	[Dec29 06:45] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:32:56 up  4:15,  0 user,  load average: 0.55, 0.81, 1.73
	Linux force-systemd-flag-136540 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 29 07:32:52 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:53 force-systemd-flag-136540 kubelet[5437]: E1229 07:32:53.697259    5437 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:53 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:54 force-systemd-flag-136540 kubelet[5505]: E1229 07:32:54.458505    5505 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:54 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:55 force-systemd-flag-136540 kubelet[5542]: E1229 07:32:55.239966    5542 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:55 force-systemd-flag-136540 kubelet[5582]: E1229 07:32:55.985819    5582 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:55 force-systemd-flag-136540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
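
The kubelet journal above shows the same validation failure on every restart (restart counters 319 through 322): the kubelet is rejecting the cgroup v1 host outright. A generic way to confirm which cgroup version a node runs, independent of minikube:

	# prints cgroup2fs on a cgroup v2 host, tmpfs on cgroup v1
	stat -fc %T /sys/fs/cgroup
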
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-136540 -n force-systemd-flag-136540
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-136540 -n force-systemd-flag-136540: exit status 6 (485.116757ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1229 07:32:57.011227  962640 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-136540" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-136540" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-136540" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-136540
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-136540: (1.947086817s)
--- FAIL: TestForceSystemdFlag (507.17s)
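
The failure output's own suggestion ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") would translate to roughly the following rerun; this merely restates the printed advice against the failing profile and is untested here:

	out/minikube-linux-arm64 start -p force-systemd-flag-136540 \
	  --extra-config=kubelet.cgroup-driver=systemd
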

TestForceSystemdEnv (508.37s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: exit status 109 (8m24.603751038s)

-- stdout --
	* [force-systemd-env-262325] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-262325" primary control-plane node in "force-systemd-env-262325" cluster
	* Pulling base image v0.0.48-1766979815-22353 ...
	
	

-- /stdout --
** stderr ** 
	I1229 07:24:16.413170  944930 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:24:16.413392  944930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:16.413420  944930 out.go:374] Setting ErrFile to fd 2...
	I1229 07:24:16.413440  944930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:16.413731  944930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:24:16.414195  944930 out.go:368] Setting JSON to false
	I1229 07:24:16.415225  944930 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14806,"bootTime":1766978251,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 07:24:16.415322  944930 start.go:143] virtualization:  
	I1229 07:24:16.422974  944930 out.go:179] * [force-systemd-env-262325] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:24:16.427023  944930 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:24:16.427287  944930 notify.go:221] Checking for updates...
	I1229 07:24:16.433043  944930 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:24:16.435866  944930 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 07:24:16.438686  944930 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 07:24:16.441520  944930 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:24:16.444375  944930 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1229 07:24:16.447700  944930 config.go:182] Loaded profile config "NoKubernetes-198702": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1229 07:24:16.447832  944930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:24:16.491699  944930 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:24:16.491804  944930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:24:16.595447  944930 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:16.584535869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:24:16.595597  944930 docker.go:319] overlay module found
	I1229 07:24:16.598900  944930 out.go:179] * Using the docker driver based on user configuration
	I1229 07:24:16.601689  944930 start.go:309] selected driver: docker
	I1229 07:24:16.601707  944930 start.go:928] validating driver "docker" against <nil>
	I1229 07:24:16.601721  944930 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:24:16.602438  944930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:24:16.696908  944930 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:16.685654511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:24:16.697060  944930 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:24:16.697285  944930 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:24:16.700419  944930 out.go:179] * Using Docker driver with root privileges
	I1229 07:24:16.703376  944930 cni.go:84] Creating CNI manager for ""
	I1229 07:24:16.703458  944930 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:16.703469  944930 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1229 07:24:16.703553  944930 start.go:353] cluster config:
	{Name:force-systemd-env-262325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-262325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:16.706741  944930 out.go:179] * Starting "force-systemd-env-262325" primary control-plane node in "force-systemd-env-262325" cluster
	I1229 07:24:16.709580  944930 cache.go:134] Beginning downloading kic base image for docker with docker
	I1229 07:24:16.712544  944930 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:24:16.715424  944930 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:16.715474  944930 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1229 07:24:16.715484  944930 cache.go:65] Caching tarball of preloaded images
	I1229 07:24:16.715595  944930 preload.go:251] Found /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:24:16.715612  944930 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:24:16.715732  944930 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/config.json ...
	I1229 07:24:16.715755  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/config.json: {Name:mka529c0207d611d54118ea79f3c4d7fb332032d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:16.715920  944930 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:24:16.746414  944930 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:24:16.746439  944930 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:24:16.746453  944930 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:24:16.746485  944930 start.go:360] acquireMachinesLock for force-systemd-env-262325: {Name:mk12a1851310bc89cb5586ef94e29c31e660cb77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:24:16.746590  944930 start.go:364] duration metric: took 83.272µs to acquireMachinesLock for "force-systemd-env-262325"
	I1229 07:24:16.746626  944930 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-262325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-262325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1229 07:24:16.746695  944930 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:24:16.750163  944930 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:24:16.750408  944930 start.go:159] libmachine.API.Create for "force-systemd-env-262325" (driver="docker")
	I1229 07:24:16.750441  944930 client.go:173] LocalClient.Create starting
	I1229 07:24:16.750520  944930 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem
	I1229 07:24:16.750559  944930 main.go:144] libmachine: Decoding PEM data...
	I1229 07:24:16.750578  944930 main.go:144] libmachine: Parsing certificate...
	I1229 07:24:16.750635  944930 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem
	I1229 07:24:16.750658  944930 main.go:144] libmachine: Decoding PEM data...
	I1229 07:24:16.750678  944930 main.go:144] libmachine: Parsing certificate...
	I1229 07:24:16.751094  944930 cli_runner.go:164] Run: docker network inspect force-systemd-env-262325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:24:16.768012  944930 cli_runner.go:211] docker network inspect force-systemd-env-262325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:24:16.768094  944930 network_create.go:284] running [docker network inspect force-systemd-env-262325] to gather additional debugging logs...
	I1229 07:24:16.768374  944930 cli_runner.go:164] Run: docker network inspect force-systemd-env-262325
	W1229 07:24:16.785726  944930 cli_runner.go:211] docker network inspect force-systemd-env-262325 returned with exit code 1
	I1229 07:24:16.785769  944930 network_create.go:287] error running [docker network inspect force-systemd-env-262325]: docker network inspect force-systemd-env-262325: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-262325 not found
	I1229 07:24:16.785784  944930 network_create.go:289] output of [docker network inspect force-systemd-env-262325]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-262325 not found
	
	** /stderr **
	I1229 07:24:16.785879  944930 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:24:16.804893  944930 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
	I1229 07:24:16.805216  944930 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e5c59511c8c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:c4:8e:57:d6:4a} reservation:<nil>}
	I1229 07:24:16.805489  944930 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-857d67da440f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:bc:86:0f:2c:21} reservation:<nil>}
	I1229 07:24:16.805887  944930 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ee6e0}
	I1229 07:24:16.805905  944930 network_create.go:124] attempt to create docker network force-systemd-env-262325 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1229 07:24:16.805964  944930 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-262325 force-systemd-env-262325
	I1229 07:24:16.895879  944930 network_create.go:108] docker network force-systemd-env-262325 192.168.76.0/24 created
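The three "skipping subnet" lines above show how the free-subnet pick works: minikube walks its candidate private /24 ranges in order and takes the first one with no local bridge attached, here 192.168.76.0/24. A minimal Go sketch of that selection idea (illustrative helper names, not minikube's actual code; the real scan also consults docker's own network list):

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate /24 whose network address
	// is not already claimed by a local interface (the br-* bridges above).
	func firstFreeSubnet(candidates []string) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		taken := map[string]bool{}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok {
				// key by the masked network address, e.g. "192.168.49.0"
				taken[ipnet.IP.Mask(ipnet.Mask).String()] = true
			}
		}
		for _, c := range candidates {
			_, ipnet, err := net.ParseCIDR(c)
			if err != nil {
				return "", err
			}
			if !taken[ipnet.IP.String()] {
				return c, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		fmt.Println(firstFreeSubnet([]string{
			"192.168.49.0/24", "192.168.58.0/24",
			"192.168.67.0/24", "192.168.76.0/24",
		}))
	}
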
	I1229 07:24:16.895908  944930 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-262325" container
	I1229 07:24:16.895993  944930 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:24:16.916915  944930 cli_runner.go:164] Run: docker volume create force-systemd-env-262325 --label name.minikube.sigs.k8s.io=force-systemd-env-262325 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:24:16.939144  944930 oci.go:103] Successfully created a docker volume force-systemd-env-262325
	I1229 07:24:16.939219  944930 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-262325-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-262325 --entrypoint /usr/bin/test -v force-systemd-env-262325:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:24:17.474246  944930 oci.go:107] Successfully prepared a docker volume force-systemd-env-262325
	I1229 07:24:17.474320  944930 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:17.474335  944930 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:24:17.474414  944930 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-262325:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:24:21.080191  944930 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-262325:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.60573521s)
	I1229 07:24:21.080226  944930 kic.go:203] duration metric: took 3.605886862s to extract preloaded images to volume ...
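The extraction step above is worth calling out: rather than docker cp, the preloaded image tarball is streamed into the named volume by a throwaway container whose entrypoint is tar. A sketch of the same invocation from Go, shelling out to the docker CLI (hypothetical wrapper, mirroring the exact flags logged above):

	package kicprep

	import (
		"os"
		"os/exec"
	)

	// extractPreloadToVolume mounts the preload tarball read-only into a
	// one-shot container and untars it into the named volume, so the node
	// container later starts with images already under /var.
	func extractPreloadToVolume(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}
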
	W1229 07:24:21.080361  944930 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:24:21.080466  944930 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:24:21.160365  944930 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-262325 --name force-systemd-env-262325 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-262325 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-262325 --network force-systemd-env-262325 --ip 192.168.76.2 --volume force-systemd-env-262325:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
	I1229 07:24:21.529308  944930 cli_runner.go:164] Run: docker container inspect force-systemd-env-262325 --format={{.State.Running}}
	I1229 07:24:21.553690  944930 cli_runner.go:164] Run: docker container inspect force-systemd-env-262325 --format={{.State.Status}}
	I1229 07:24:21.583130  944930 cli_runner.go:164] Run: docker exec force-systemd-env-262325 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:24:21.640505  944930 oci.go:144] the created container "force-systemd-env-262325" has a running status.
	I1229 07:24:21.640535  944930 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa...
	I1229 07:24:22.537829  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:24:22.537883  944930 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:24:22.563925  944930 cli_runner.go:164] Run: docker container inspect force-systemd-env-262325 --format={{.State.Status}}
	I1229 07:24:22.585900  944930 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:24:22.585919  944930 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-262325 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:24:22.650082  944930 cli_runner.go:164] Run: docker container inspect force-systemd-env-262325 --format={{.State.Status}}
	I1229 07:24:22.668058  944930 machine.go:94] provisionDockerMachine start ...
	I1229 07:24:22.668162  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:22.684666  944930 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:22.684997  944930 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1229 07:24:22.685012  944930 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:24:22.685651  944930 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35842->127.0.0.1:33752: read: connection reset by peer
	I1229 07:24:25.847589  944930 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-262325
	
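The "Error dialing TCP ... connection reset by peer" line above is benign: the first handshake races sshd coming up inside the container, and libmachine simply retries until the hostname command succeeds about three seconds later. A minimal retry sketch, assuming golang.org/x/crypto/ssh (not minikube's actual dialer):

	package sshretry

	import (
		"time"

		"golang.org/x/crypto/ssh"
	)

	// retryDial keeps attempting the SSH handshake until it succeeds or
	// the attempt budget runs out; early "connection reset by peer"
	// errors are expected while sshd in the container is still starting.
	func retryDial(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			time.Sleep(time.Second)
		}
		return nil, lastErr
	}
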
	I1229 07:24:25.847659  944930 ubuntu.go:182] provisioning hostname "force-systemd-env-262325"
	I1229 07:24:25.847756  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:25.865202  944930 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:25.865518  944930 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1229 07:24:25.865539  944930 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-262325 && echo "force-systemd-env-262325" | sudo tee /etc/hostname
	I1229 07:24:26.039943  944930 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-262325
	
	I1229 07:24:26.040028  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:26.064980  944930 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:26.065299  944930 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1229 07:24:26.065327  944930 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-262325' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-262325/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-262325' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:24:26.232073  944930 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:24:26.232098  944930 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-723215/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-723215/.minikube}
	I1229 07:24:26.232145  944930 ubuntu.go:190] setting up certificates
	I1229 07:24:26.232155  944930 provision.go:84] configureAuth start
	I1229 07:24:26.232217  944930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-262325
	I1229 07:24:26.248731  944930 provision.go:143] copyHostCerts
	I1229 07:24:26.248780  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:24:26.248812  944930 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem, removing ...
	I1229 07:24:26.248822  944930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:24:26.248883  944930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem (1082 bytes)
	I1229 07:24:26.248976  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:24:26.248995  944930 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem, removing ...
	I1229 07:24:26.249000  944930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:24:26.249022  944930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem (1123 bytes)
	I1229 07:24:26.249077  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:24:26.249102  944930 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem, removing ...
	I1229 07:24:26.249113  944930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:24:26.249135  944930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem (1675 bytes)
	I1229 07:24:26.249196  944930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-262325 san=[127.0.0.1 192.168.76.2 force-systemd-env-262325 localhost minikube]
	I1229 07:24:26.480829  944930 provision.go:177] copyRemoteCerts
	I1229 07:24:26.480922  944930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:24:26.480996  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:26.498404  944930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa Username:docker}
	I1229 07:24:26.606880  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:24:26.606934  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:24:26.625641  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:24:26.625709  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:24:26.646628  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:24:26.646693  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:24:26.664972  944930 provision.go:87] duration metric: took 432.804259ms to configureAuth
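configureAuth above regenerates the docker server certificate with SANs covering every name the daemon may be reached by (127.0.0.1, 192.168.76.2, force-systemd-env-262325, localhost, minikube). A sketch of issuing such a cert from a local CA with Go's stdlib crypto (illustrative only; serial handling and PEM encoding are elided):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by the given CA,
	// valid for the listed IP and DNS SANs, and returns the DER bytes
	// plus the freshly generated private key.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"minikube"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}
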
	I1229 07:24:26.664996  944930 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:24:26.665159  944930 config.go:182] Loaded profile config "force-systemd-env-262325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:26.665213  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:26.681931  944930 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:26.682240  944930 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1229 07:24:26.682255  944930 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:24:26.848611  944930 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1229 07:24:26.848641  944930 ubuntu.go:71] root file system type: overlay
	I1229 07:24:26.848796  944930 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:24:26.848865  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:26.867473  944930 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:26.867792  944930 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1229 07:24:26.867867  944930 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:24:27.049271  944930 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:24:27.049356  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:27.072030  944930 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:27.072410  944930 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33752 <nil> <nil>}
	I1229 07:24:27.072435  944930 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:24:28.217988  944930 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-29 07:24:27.041978880 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1229 07:24:28.218020  944930 machine.go:97] duration metric: took 5.549938326s to provisionDockerMachine
	I1229 07:24:28.218033  944930 client.go:176] duration metric: took 11.467583115s to LocalClient.Create
	I1229 07:24:28.218045  944930 start.go:167] duration metric: took 11.467639638s to libmachine.API.Create "force-systemd-env-262325"
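The diff-or-replace command above is the idempotent half of provisioning: diff the rendered docker.service against the installed one and, only on a difference, swap the file, daemon-reload, enable, and restart. The same write-if-changed pattern, sketched locally in Go (the real flow runs diff/mv/systemctl over SSH inside the node container):

	package provision

	import (
		"bytes"
		"os"
		"os/exec"
	)

	// updateIfChanged rewrites the unit file and bounces the daemon only
	// when the rendered content actually differs from what is installed.
	func updateIfChanged(path string, rendered []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, rendered) {
			return false, nil // identical: skip daemon-reload and restart
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return false, err
		}
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			return true, err
		}
		return true, exec.Command("systemctl", "restart", "docker").Run()
	}
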
	I1229 07:24:28.218053  944930 start.go:293] postStartSetup for "force-systemd-env-262325" (driver="docker")
	I1229 07:24:28.218063  944930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:24:28.218130  944930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:24:28.218173  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:28.238577  944930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa Username:docker}
	I1229 07:24:28.352356  944930 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:24:28.356190  944930 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:24:28.356269  944930 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:24:28.356293  944930 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/addons for local assets ...
	I1229 07:24:28.356371  944930 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/files for local assets ...
	I1229 07:24:28.356475  944930 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> 7250782.pem in /etc/ssl/certs
	I1229 07:24:28.356490  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /etc/ssl/certs/7250782.pem
	I1229 07:24:28.356600  944930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:24:28.363962  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:28.382405  944930 start.go:296] duration metric: took 164.337558ms for postStartSetup
	I1229 07:24:28.382835  944930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-262325
	I1229 07:24:28.399819  944930 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/config.json ...
	I1229 07:24:28.400157  944930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:24:28.400229  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:28.416704  944930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa Username:docker}
	I1229 07:24:28.527651  944930 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:24:28.534024  944930 start.go:128] duration metric: took 11.787315932s to createHost
	I1229 07:24:28.534052  944930 start.go:83] releasing machines lock for "force-systemd-env-262325", held for 11.787446554s
	I1229 07:24:28.534129  944930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-262325
	I1229 07:24:28.556155  944930 ssh_runner.go:195] Run: cat /version.json
	I1229 07:24:28.556219  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:28.556404  944930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:24:28.556471  944930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-262325
	I1229 07:24:28.587933  944930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa Username:docker}
	I1229 07:24:28.595758  944930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33752 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-env-262325/id_rsa Username:docker}
	I1229 07:24:28.821449  944930 ssh_runner.go:195] Run: systemctl --version
	I1229 07:24:28.828211  944930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:24:28.833073  944930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:24:28.833138  944930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:24:28.879677  944930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
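The find/mv pass above sidelines any bridge or podman CNI configs by renaming them to *.mk_disabled, so the runtime's default network cannot shadow the CNI minikube installs later. A local sketch of the same rename pass (stdlib only, not minikube's implementation):

	package cni

	import (
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge and podman CNI configs in dir to
	// <name>.mk_disabled, skipping anything already disabled.
	func disableBridgeCNI(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}
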
	I1229 07:24:28.879750  944930 start.go:496] detecting cgroup driver to use...
	I1229 07:24:28.879782  944930 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:24:28.879911  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:24:28.897702  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:24:28.907204  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:24:28.917655  944930 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:24:28.917782  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:24:28.927699  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:24:28.937404  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:24:28.946354  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:24:28.962878  944930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:24:28.976683  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:24:28.990963  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:24:28.999705  944930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:24:29.018982  944930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:24:29.027903  944930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:24:29.036622  944930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:29.170190  944930 ssh_runner.go:195] Run: sudo systemctl restart containerd
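The sed pipeline above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, runc v2, conf_dir, enable_unprivileged_ports) rewrites containerd's CRI stanza in place before the restart. After those edits the relevant part of /etc/containerd/config.toml would look roughly like the following; the exact nesting varies by containerd version, so treat this as a reconstruction, not the file's verbatim content:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true
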
	I1229 07:24:29.287532  944930 start.go:496] detecting cgroup driver to use...
	I1229 07:24:29.287614  944930 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:24:29.287706  944930 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:24:29.302246  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:24:29.316004  944930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:24:29.346559  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:24:29.364015  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:24:29.381727  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:24:29.400953  944930 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:24:29.406511  944930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:24:29.415905  944930 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:24:29.430273  944930 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:24:29.580992  944930 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:24:29.740330  944930 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:24:29.740484  944930 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1229 07:24:29.762397  944930 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:24:29.780448  944930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:29.929933  944930 ssh_runner.go:195] Run: sudo systemctl restart docker
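The 129-byte /etc/docker/daemon.json written three steps up is what actually enforces the systemd cgroup driver for dockerd; the log does not print its contents, but based on how minikube typically renders this file it plausibly looks like the following (assumption, not taken from this log):

	{
	  "exec-opts": ["native.cgroupdriver=systemd"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
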
	I1229 07:24:30.492536  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:24:30.508842  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:24:30.532211  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:24:30.553458  944930 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:24:30.729154  944930 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:24:30.899172  944930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:31.020002  944930 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:24:31.035862  944930 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:24:31.049278  944930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:31.164093  944930 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:24:31.231617  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:24:31.244997  944930 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:24:31.245064  944930 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:24:31.249543  944930 start.go:574] Will wait 60s for crictl version
	I1229 07:24:31.249616  944930 ssh_runner.go:195] Run: which crictl
	I1229 07:24:31.253103  944930 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:24:31.278333  944930 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1229 07:24:31.278404  944930 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:24:31.299555  944930 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:24:31.326156  944930 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1229 07:24:31.326266  944930 cli_runner.go:164] Run: docker network inspect force-systemd-env-262325 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:24:31.342228  944930 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1229 07:24:31.346040  944930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
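The bash one-liner above is a small idempotent /etc/hosts editor: drop any stale host.minikube.internal line, append the fresh mapping, and sudo cp the temp file into place (the same pattern reappears below for control-plane.minikube.internal). The equivalent logic sketched in Go (illustrative, not minikube's implementation):

	package hosts

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// ensureHostsLine removes any line ending in "<tab>name", appends the
	// fresh "ip<tab>name" mapping, and copies the result over /etc/hosts.
	func ensureHostsLine(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
	}
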
	I1229 07:24:31.355355  944930 kubeadm.go:884] updating cluster {Name:force-systemd-env-262325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-262325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:24:31.355470  944930 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:31.355531  944930 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:24:31.373507  944930 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 07:24:31.373530  944930 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:24:31.373592  944930 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:24:31.392960  944930 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 07:24:31.392988  944930 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:24:31.392999  944930 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 docker true true} ...
	I1229 07:24:31.393091  944930 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-env-262325 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-262325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1229 07:24:31.393165  944930 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:24:31.451962  944930 cni.go:84] Creating CNI manager for ""
	I1229 07:24:31.451986  944930 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:31.452009  944930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:24:31.452040  944930 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-262325 NodeName:force-systemd-env-262325 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:24:31.452229  944930 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-262325"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1229 07:24:31.452305  944930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:24:31.461880  944930 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:24:31.461941  944930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:24:31.474828  944930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1229 07:24:31.491344  944930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:24:31.516045  944930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1229 07:24:31.534916  944930 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:24:31.541199  944930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:24:31.555928  944930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:31.701569  944930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:24:31.732726  944930 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325 for IP: 192.168.76.2
	I1229 07:24:31.732745  944930 certs.go:195] generating shared ca certs ...
	I1229 07:24:31.732761  944930 certs.go:227] acquiring lock for ca certs: {Name:mk9c2ed6b225eba3a3b373f488351467f747c9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:31.732891  944930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key
	I1229 07:24:31.732945  944930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key
	I1229 07:24:31.732953  944930 certs.go:257] generating profile certs ...
	I1229 07:24:31.733010  944930 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.key
	I1229 07:24:31.733029  944930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.crt with IP's: []
	I1229 07:24:31.953441  944930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.crt ...
	I1229 07:24:31.953472  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.crt: {Name:mk7ad4183464c1603a35b27bdbd4890929661959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:31.953818  944930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.key ...
	I1229 07:24:31.953839  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.key: {Name:mkd5d334ba5e2cf232af9f4d5bf18b7dec8d3a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:31.954033  944930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4
	I1229 07:24:31.954047  944930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:24:32.258888  944930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4 ...
	I1229 07:24:32.258928  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4: {Name:mk87011ab0bed5ba73ec28c5d64c96cd19c39c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.259158  944930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4 ...
	I1229 07:24:32.259174  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4: {Name:mk0553d075da44c50ec39ad2f7b5454d7e76c551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.259375  944930 certs.go:382] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4 -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt
	I1229 07:24:32.259473  944930 certs.go:386] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4 -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key
	I1229 07:24:32.259530  944930 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key
	I1229 07:24:32.259544  944930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt with IP's: []
	I1229 07:24:32.894493  944930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt ...
	I1229 07:24:32.894526  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt: {Name:mkea3e1531a154c0a2292c3bbccd2b7269934447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.894723  944930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key ...
	I1229 07:24:32.894741  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key: {Name:mk657ce2532a9437383ed9e5a68f0b2eaee3be8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.894831  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:24:32.894856  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:24:32.894870  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:24:32.894890  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:24:32.894905  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:24:32.894918  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:24:32.894934  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:24:32.894948  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:24:32.895006  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem (1338 bytes)
	W1229 07:24:32.895050  944930 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078_empty.pem, impossibly tiny 0 bytes
	I1229 07:24:32.895063  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:24:32.895100  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:24:32.895131  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:24:32.895160  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem (1675 bytes)
	I1229 07:24:32.895207  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:32.895241  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /usr/share/ca-certificates/7250782.pem
	I1229 07:24:32.895258  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:32.895270  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem -> /usr/share/ca-certificates/725078.pem
	I1229 07:24:32.895795  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:24:32.926379  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:24:32.955113  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:24:32.987173  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:24:33.011680  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:24:33.043980  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:24:33.072516  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:24:33.090944  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:24:33.122221  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /usr/share/ca-certificates/7250782.pem (1708 bytes)
	I1229 07:24:33.154515  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:24:33.180628  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem --> /usr/share/ca-certificates/725078.pem (1338 bytes)
	I1229 07:24:33.200935  944930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:24:33.215557  944930 ssh_runner.go:195] Run: openssl version
	I1229 07:24:33.222469  944930 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.230833  944930 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/725078.pem /etc/ssl/certs/725078.pem
	I1229 07:24:33.238816  944930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.242921  944930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.242995  944930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.284737  944930 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:24:33.293366  944930 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/725078.pem /etc/ssl/certs/51391683.0
	I1229 07:24:33.301788  944930 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.310355  944930 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7250782.pem /etc/ssl/certs/7250782.pem
	I1229 07:24:33.318737  944930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.323030  944930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.323096  944930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.365678  944930 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:33.374535  944930 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7250782.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:33.383381  944930 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.406146  944930 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:24:33.419003  944930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.425102  944930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.425168  944930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.507380  944930 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:24:33.520230  944930 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
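(The three symlink rounds above follow OpenSSL's hashed trust-store convention: each PEM is copied under /usr/share/ca-certificates, its subject hash is computed with openssl x509 -hash, and /etc/ssl/certs/<hash>.0 is linked to it so OpenSSL can find the cert by hash lookup. A minimal sketch of one round; the cert path here is a placeholder, not a file from this run:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"
)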
	I1229 07:24:33.529069  944930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:24:33.533699  944930 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:24:33.533753  944930 kubeadm.go:401] StartCluster: {Name:force-systemd-env-262325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-262325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:33.533870  944930 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:24:33.551612  944930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:24:33.562198  944930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:24:33.570906  944930 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:24:33.570983  944930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:24:33.582835  944930 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:24:33.582858  944930 kubeadm.go:158] found existing configuration files:
	
	I1229 07:24:33.582910  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:24:33.592306  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:24:33.592373  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:24:33.600692  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:24:33.609835  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:24:33.609910  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:24:33.618261  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:24:33.627413  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:24:33.627481  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:24:33.635696  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:24:33.644794  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:24:33.644866  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
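(Condensed, the stale-config check above amounts to the following loop: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init. Endpoint and paths are taken directly from the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
)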
	I1229 07:24:33.653205  944930 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:24:33.698620  944930 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:24:33.698983  944930 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:24:33.823957  944930 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:24:33.824034  944930 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:24:33.824075  944930 kubeadm.go:319] OS: Linux
	I1229 07:24:33.824135  944930 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:24:33.824190  944930 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:24:33.824242  944930 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:24:33.824294  944930 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:24:33.824354  944930 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:24:33.824417  944930 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:24:33.824470  944930 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:24:33.824522  944930 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:24:33.824574  944930 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:24:33.908773  944930 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:24:33.908888  944930 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:24:33.908984  944930 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:24:33.927744  944930 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:24:33.936038  944930 out.go:252]   - Generating certificates and keys ...
	I1229 07:24:33.936185  944930 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:24:33.936257  944930 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:24:34.036786  944930 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:24:34.214394  944930 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:24:34.820220  944930 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:24:34.908055  944930 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:24:35.039736  944930 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:24:35.039913  944930 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:24:35.131702  944930 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:24:35.132041  944930 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:24:35.982814  944930 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:24:36.150386  944930 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:24:36.342238  944930 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:24:36.342596  944930 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:24:36.535398  944930 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:24:36.920847  944930 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:24:37.020449  944930 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:24:37.199538  944930 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:24:37.864933  944930 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:24:37.865043  944930 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:24:37.866599  944930 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:24:37.871433  944930 out.go:252]   - Booting up control plane ...
	I1229 07:24:37.871535  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:24:37.871619  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:24:37.872637  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:24:37.889844  944930 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:24:37.890657  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:24:37.898413  944930 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:24:37.899217  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:24:37.899595  944930 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:24:38.054089  944930 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:24:38.054212  944930 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:28:38.051085  944930 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001227018s
	I1229 07:28:38.051157  944930 kubeadm.go:319] 
	I1229 07:28:38.051223  944930 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:28:38.051300  944930 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:28:38.051410  944930 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:28:38.051416  944930 kubeadm.go:319] 
	I1229 07:28:38.051527  944930 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:28:38.051563  944930 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:28:38.051605  944930 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:28:38.051610  944930 kubeadm.go:319] 
	I1229 07:28:38.058849  944930 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:28:38.059443  944930 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:28:38.059598  944930 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:28:38.059869  944930 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:28:38.059880  944930 kubeadm.go:319] 
	I1229 07:28:38.059960  944930 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
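(The troubleshooting commands suggested above can be run directly against the node container, which the docker driver names after the profile; a sketch assuming the kicbase image ships systemctl, journalctl, and curl:

	docker exec force-systemd-env-262325 systemctl status kubelet
	docker exec force-systemd-env-262325 journalctl -xeu kubelet --no-pager
	docker exec force-systemd-env-262325 curl -sS http://127.0.0.1:10248/healthz
)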
	W1229 07:28:38.060093  944930 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001227018s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
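(The cgroups v1 warning above refers to the kubelet's FailCgroupV1 configuration field. This run already patches kubeletconfiguration via kubeadm, so the following is an illustrative sketch only, assuming the key is absent from the rendered config on the node:

	grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml \
	  || echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet
)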
	
	I1229 07:28:38.060199  944930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:28:38.479926  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:28:38.493494  944930 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:28:38.493596  944930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:28:38.502170  944930 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:28:38.502188  944930 kubeadm.go:158] found existing configuration files:
	
	I1229 07:28:38.502240  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:28:38.510027  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:28:38.510099  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:28:38.517614  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:28:38.525160  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:28:38.525235  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:28:38.532408  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:28:38.539798  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:28:38.539863  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:28:38.547721  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:28:38.556239  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:28:38.556317  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:28:38.563562  944930 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:28:38.600473  944930 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:28:38.600536  944930 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:28:38.697404  944930 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:28:38.697480  944930 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:28:38.697521  944930 kubeadm.go:319] OS: Linux
	I1229 07:28:38.697573  944930 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:28:38.697627  944930 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:28:38.697678  944930 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:28:38.697731  944930 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:28:38.697782  944930 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:28:38.697834  944930 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:28:38.697884  944930 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:28:38.697936  944930 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:28:38.697985  944930 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:28:38.775235  944930 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:28:38.775358  944930 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:28:38.775462  944930 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:28:38.787352  944930 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:28:38.793014  944930 out.go:252]   - Generating certificates and keys ...
	I1229 07:28:38.793177  944930 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:28:38.793288  944930 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:28:38.793407  944930 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:28:38.793528  944930 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:28:38.793634  944930 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:28:38.793741  944930 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:28:38.793846  944930 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:28:38.793963  944930 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:28:38.794050  944930 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:28:38.794131  944930 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:28:38.794181  944930 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:28:38.794240  944930 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:28:38.945013  944930 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:28:39.365721  944930 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:28:39.923785  944930 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:28:40.126540  944930 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:28:40.369067  944930 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:28:40.369710  944930 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:28:40.372351  944930 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:28:40.375721  944930 out.go:252]   - Booting up control plane ...
	I1229 07:28:40.375827  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:28:40.375917  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:28:40.375990  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:28:40.395241  944930 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:28:40.395355  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:28:40.403534  944930 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:28:40.406687  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:28:40.406741  944930 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:28:40.530422  944930 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:28:40.530543  944930 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:32:40.530696  944930 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288867s
	I1229 07:32:40.530728  944930 kubeadm.go:319] 
	I1229 07:32:40.530784  944930 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:32:40.530824  944930 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:32:40.530957  944930 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:32:40.530973  944930 kubeadm.go:319] 
	I1229 07:32:40.531080  944930 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:32:40.531114  944930 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:32:40.531145  944930 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:32:40.531149  944930 kubeadm.go:319] 
	I1229 07:32:40.535677  944930 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:32:40.536136  944930 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:32:40.536249  944930 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:32:40.536484  944930 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:32:40.536490  944930 kubeadm.go:319] 
	I1229 07:32:40.536559  944930 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:32:40.536616  944930 kubeadm.go:403] duration metric: took 8m7.002868565s to StartCluster
	I1229 07:32:40.536662  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:32:40.536722  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:32:40.576172  944930 cri.go:96] found id: ""
	I1229 07:32:40.576210  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.576220  944930 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:32:40.576226  944930 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:32:40.576289  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:32:40.601170  944930 cri.go:96] found id: ""
	I1229 07:32:40.601196  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.601205  944930 logs.go:284] No container was found matching "etcd"
	I1229 07:32:40.601212  944930 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:32:40.601271  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:32:40.627437  944930 cri.go:96] found id: ""
	I1229 07:32:40.627475  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.627484  944930 logs.go:284] No container was found matching "coredns"
	I1229 07:32:40.627490  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:32:40.627547  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:32:40.653898  944930 cri.go:96] found id: ""
	I1229 07:32:40.653926  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.653935  944930 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:32:40.653942  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:32:40.654002  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:32:40.679278  944930 cri.go:96] found id: ""
	I1229 07:32:40.679303  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.679312  944930 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:32:40.679320  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:32:40.679376  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:32:40.704607  944930 cri.go:96] found id: ""
	I1229 07:32:40.704633  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.704642  944930 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:32:40.704650  944930 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:32:40.704706  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:32:40.729518  944930 cri.go:96] found id: ""
	I1229 07:32:40.729540  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.729548  944930 logs.go:284] No container was found matching "kindnet"
	I1229 07:32:40.729560  944930 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:32:40.729573  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:32:40.797226  944930 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:32:40.789353    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.789964    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791459    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791927    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.793356    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:32:40.789353    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.789964    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791459    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791927    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.793356    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:32:40.797248  944930 logs.go:123] Gathering logs for Docker ...
	I1229 07:32:40.797262  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 07:32:40.820013  944930 logs.go:123] Gathering logs for container status ...
	I1229 07:32:40.820050  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:32:40.847953  944930 logs.go:123] Gathering logs for kubelet ...
	I1229 07:32:40.847981  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:32:40.905225  944930 logs.go:123] Gathering logs for dmesg ...
	I1229 07:32:40.905262  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
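(The same post-mortem can be reproduced by hand against the node container; these mirror the gathering commands logged above, with the container name assumed from the profile:

	docker exec force-systemd-env-262325 journalctl -u kubelet -n 400 --no-pager
	docker exec force-systemd-env-262325 crictl ps -a
	docker exec force-systemd-env-262325 dmesg | tail -n 400
)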
	W1229 07:32:40.923718  944930 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288867s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:32:40.923764  944930 out.go:285] * 
	W1229 07:32:40.923815  944930 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288867s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:40.923831  944930 out.go:285] * 
	W1229 07:32:40.924083  944930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:32:40.930857  944930 out.go:203] 
	W1229 07:32:40.934565  944930 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288867s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:40.934610  944930 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:32:40.934630  944930 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:32:40.937710  944930 out.go:203] 

                                                
                                                
** /stderr **
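Per the W-level suggestion logged above, a retry would append the kubelet cgroup-driver override to the exact invocation that failed (args from docker_test.go:157 below); a sketch only, not re-run here:

	out/minikube-linux-arm64 start -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd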
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker" : exit status 109
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-262325 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-12-29 07:32:41.375369325 +0000 UTC m=+2786.037242452
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-262325
helpers_test.go:244: (dbg) docker inspect force-systemd-env-262325:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609",
	        "Created": "2025-12-29T07:24:21.185923284Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 946087,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-29T07:24:21.27553479Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b9f008732615ab177a9385fb47e8e8b6783b73758be2e5f7e791427d50517cf7",
	        "ResolvConfPath": "/var/lib/docker/containers/4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609/hostname",
	        "HostsPath": "/var/lib/docker/containers/4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609/hosts",
	        "LogPath": "/var/lib/docker/containers/4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609/4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609-json.log",
	        "Name": "/force-systemd-env-262325",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-262325:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-262325",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4a655f927b6cb1bfaa7bdd0523cc6bdb5db51afd44fddaa3267364ac84bdb609",
	                "LowerDir": "/var/lib/docker/overlay2/7e25f3ee821b74c1dcd82cb94b2cd2fc41adf029f064c4084b202c1efdb4ccad-init/diff:/var/lib/docker/overlay2/3788d7c7c8e91fd886b287c15675406ce26d741d5d808d18bcc9c345d38db92c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7e25f3ee821b74c1dcd82cb94b2cd2fc41adf029f064c4084b202c1efdb4ccad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7e25f3ee821b74c1dcd82cb94b2cd2fc41adf029f064c4084b202c1efdb4ccad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7e25f3ee821b74c1dcd82cb94b2cd2fc41adf029f064c4084b202c1efdb4ccad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-262325",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-262325/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-262325",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-262325",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-262325",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89cf62ef548e1dde9d5d8ebf55e352565175f088c1368252e85b6035c3a4bca7",
	            "SandboxKey": "/var/run/docker/netns/89cf62ef548e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33752"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33753"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33756"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33754"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33755"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-262325": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:b4:92:85:a5:49",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79307d27fbf3d5b7cf5a9c7c8d53443fd85506dc934f840773440474eb302913",
	                    "EndpointID": "ded93a83b976f66e144e4da7d65bd30af7cd6871e1a0ca11fcf9d0b256bd3637",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-262325",
	                        "4a655f927b6c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
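Most of this inspect dump is incidental to the failure; the cgroup-related host settings can be pulled out directly with a Go template over the field names shown in the JSON above (sketch, run on the Jenkins host):

	docker inspect -f '{{.HostConfig.CgroupnsMode}} {{.HostConfig.CgroupParent}} {{.HostConfig.Privileged}}' force-systemd-env-262325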
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-262325 -n force-systemd-env-262325
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-262325 -n force-systemd-env-262325: exit status 6 (331.116937ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:32:41.709075  959848 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-262325" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig

                                                
                                                
** /stderr **
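The endpoint error matches the stdout warning: the profile never reached kubeconfig registration. The command the warning names would run as follows, though with no kubeconfig entry for this profile it cannot succeed until the cluster actually starts (sketch):

	out/minikube-linux-arm64 -p force-systemd-env-262325 update-context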
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-262325 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                             ARGS                                                              │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-728759 sudo cat /etc/docker/daemon.json                                                                             │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo docker system info                                                                                      │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl status cri-docker --all --full --no-pager                                                     │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl cat cri-docker --no-pager                                                                     │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                          │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cri-dockerd --version                                                                                   │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl status containerd --all --full --no-pager                                                     │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl cat containerd --no-pager                                                                     │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /lib/systemd/system/containerd.service                                                              │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo cat /etc/containerd/config.toml                                                                         │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo containerd config dump                                                                                  │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl status crio --all --full --no-pager                                                           │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo systemctl cat crio --no-pager                                                                           │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                 │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p cilium-728759 sudo crio config                                                                                             │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ delete  │ -p cilium-728759                                                                                                              │ cilium-728759             │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ start   │ -p force-systemd-env-262325 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker                  │ force-systemd-env-262325  │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ -p NoKubernetes-198702 sudo systemctl is-active --quiet service kubelet                                                       │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ stop    │ -p NoKubernetes-198702                                                                                                        │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ start   │ -p NoKubernetes-198702 --driver=docker  --container-runtime=docker                                                            │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ ssh     │ -p NoKubernetes-198702 sudo systemctl is-active --quiet service kubelet                                                       │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ delete  │ -p NoKubernetes-198702                                                                                                        │ NoKubernetes-198702       │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │ 29 Dec 25 07:24 UTC │
	│ start   │ -p force-systemd-flag-136540 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker │ force-systemd-flag-136540 │ jenkins │ v1.37.0 │ 29 Dec 25 07:24 UTC │                     │
	│ ssh     │ force-systemd-env-262325 ssh docker info --format {{.CgroupDriver}}                                                           │ force-systemd-env-262325  │ jenkins │ v1.37.0 │ 29 Dec 25 07:32 UTC │ 29 Dec 25 07:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 07:24:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 07:24:31.862836  949749 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:24:31.863055  949749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:31.863084  949749 out.go:374] Setting ErrFile to fd 2...
	I1229 07:24:31.863106  949749 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:24:31.863378  949749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:24:31.863845  949749 out.go:368] Setting JSON to false
	I1229 07:24:31.864812  949749 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14821,"bootTime":1766978251,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 07:24:31.864951  949749 start.go:143] virtualization:  
	I1229 07:24:31.867861  949749 out.go:179] * [force-systemd-flag-136540] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 07:24:31.869825  949749 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 07:24:31.869885  949749 notify.go:221] Checking for updates...
	I1229 07:24:31.875448  949749 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 07:24:31.878231  949749 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 07:24:31.880884  949749 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 07:24:31.883938  949749 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 07:24:31.887027  949749 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 07:24:31.890228  949749 config.go:182] Loaded profile config "force-systemd-env-262325": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:31.890373  949749 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 07:24:31.923367  949749 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 07:24:31.923482  949749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:24:32.003280  949749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:31.993283051 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:24:32.003399  949749 docker.go:319] overlay module found
	I1229 07:24:32.006854  949749 out.go:179] * Using the docker driver based on user configuration
	I1229 07:24:32.009686  949749 start.go:309] selected driver: docker
	I1229 07:24:32.009709  949749 start.go:928] validating driver "docker" against <nil>
	I1229 07:24:32.009723  949749 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 07:24:32.010422  949749 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:24:32.093914  949749 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 07:24:32.084018482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:24:32.094069  949749 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 07:24:32.094295  949749 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 07:24:32.097347  949749 out.go:179] * Using Docker driver with root privileges
	I1229 07:24:32.100108  949749 cni.go:84] Creating CNI manager for ""
	I1229 07:24:32.100218  949749 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:32.100231  949749 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1229 07:24:32.100307  949749 start.go:353] cluster config:
	{Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:32.103339  949749 out.go:179] * Starting "force-systemd-flag-136540" primary control-plane node in "force-systemd-flag-136540" cluster
	I1229 07:24:32.106301  949749 cache.go:134] Beginning downloading kic base image for docker with docker
	I1229 07:24:32.109381  949749 out.go:179] * Pulling base image v0.0.48-1766979815-22353 ...
	I1229 07:24:32.112189  949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:32.112257  949749 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
	I1229 07:24:32.112273  949749 cache.go:65] Caching tarball of preloaded images
	I1229 07:24:32.112370  949749 preload.go:251] Found /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1229 07:24:32.112387  949749 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on docker
	I1229 07:24:32.112504  949749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json ...
	I1229 07:24:32.112529  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json: {Name:mkd5ba600f81117204cfd1742166eccffeab192c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.112704  949749 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 07:24:32.142727  949749 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon, skipping pull
	I1229 07:24:32.142753  949749 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 exists in daemon, skipping load
	I1229 07:24:32.142768  949749 cache.go:243] Successfully downloaded all kic artifacts
	I1229 07:24:32.142799  949749 start.go:360] acquireMachinesLock for force-systemd-flag-136540: {Name:mk4472157db195a18f5d219cb5373fd9e5bc1c32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1229 07:24:32.142903  949749 start.go:364] duration metric: took 83.87µs to acquireMachinesLock for "force-systemd-flag-136540"
	I1229 07:24:32.142934  949749 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1229 07:24:32.143011  949749 start.go:125] createHost starting for "" (driver="docker")
	I1229 07:24:31.451962  944930 cni.go:84] Creating CNI manager for ""
	I1229 07:24:31.451986  944930 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:31.452009  944930 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:24:31.452040  944930 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-262325 NodeName:force-systemd-env-262325 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:24:31.452229  944930 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-env-262325"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
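	The generated KubeletConfiguration above pins cgroupDriver: systemd, so the Docker daemon inside the node must report the same driver for the kubelet to come up. The quickest consistency check is the same probe docker_test.go:110 runs against this profile; a sketch:
	
	out/minikube-linux-arm64 -p force-systemd-env-262325 ssh "docker info --format {{.CgroupDriver}}"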
	
	I1229 07:24:31.452305  944930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:24:31.461880  944930 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:24:31.461941  944930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:24:31.474828  944930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1229 07:24:31.491344  944930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:24:31.516045  944930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1229 07:24:31.534916  944930 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:24:31.541199  944930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
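
The one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal mapping, append the fresh one, and install the result via a temp file plus cp rather than mv, presumably because /etc/hosts is a bind mount inside the container. The same idea as a Go sketch (illustrative, not minikube's code):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping for the control-plane name; keep everything else.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	// The shell version stages to /tmp/h.$$ and copies with cp instead of
    	// renaming, which keeps working over a bind-mounted /etc/hosts.
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
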
	I1229 07:24:31.555928  944930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:31.701569  944930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:24:31.732726  944930 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325 for IP: 192.168.76.2
	I1229 07:24:31.732745  944930 certs.go:195] generating shared ca certs ...
	I1229 07:24:31.732761  944930 certs.go:227] acquiring lock for ca certs: {Name:mk9c2ed6b225eba3a3b373f488351467f747c9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:31.732891  944930 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key
	I1229 07:24:31.732945  944930 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key
	I1229 07:24:31.732953  944930 certs.go:257] generating profile certs ...
	I1229 07:24:31.733010  944930 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.key
	I1229 07:24:31.733029  944930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.crt with IP's: []
	I1229 07:24:31.953441  944930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.crt ...
	I1229 07:24:31.953472  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.crt: {Name:mk7ad4183464c1603a35b27bdbd4890929661959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:31.953818  944930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.key ...
	I1229 07:24:31.953839  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/client.key: {Name:mkd5d334ba5e2cf232af9f4d5bf18b7dec8d3a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:31.954033  944930 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4
	I1229 07:24:31.954047  944930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1229 07:24:32.258888  944930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4 ...
	I1229 07:24:32.258928  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4: {Name:mk87011ab0bed5ba73ec28c5d64c96cd19c39c98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.259158  944930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4 ...
	I1229 07:24:32.259174  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4: {Name:mk0553d075da44c50ec39ad2f7b5454d7e76c551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.259375  944930 certs.go:382] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt.480d78b4 -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt
	I1229 07:24:32.259473  944930 certs.go:386] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key.480d78b4 -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key
	I1229 07:24:32.259530  944930 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key
	I1229 07:24:32.259544  944930 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt with IP's: []
	I1229 07:24:32.894493  944930 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt ...
	I1229 07:24:32.894526  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt: {Name:mkea3e1531a154c0a2292c3bbccd2b7269934447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.894723  944930 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key ...
	I1229 07:24:32.894741  944930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key: {Name:mk657ce2532a9437383ed9e5a68f0b2eaee3be8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:32.894831  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:24:32.894856  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:24:32.894870  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:24:32.894890  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:24:32.894905  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:24:32.894918  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:24:32.894934  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:24:32.894948  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:24:32.895006  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem (1338 bytes)
	W1229 07:24:32.895050  944930 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078_empty.pem, impossibly tiny 0 bytes
	I1229 07:24:32.895063  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:24:32.895100  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:24:32.895131  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:24:32.895160  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem (1675 bytes)
	I1229 07:24:32.895207  944930 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:32.895241  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /usr/share/ca-certificates/7250782.pem
	I1229 07:24:32.895258  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:32.895270  944930 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem -> /usr/share/ca-certificates/725078.pem
	I1229 07:24:32.895795  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:24:32.926379  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:24:32.955113  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:24:32.987173  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:24:33.011680  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:24:33.043980  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:24:33.072516  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:24:33.090944  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-env-262325/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1229 07:24:33.122221  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /usr/share/ca-certificates/7250782.pem (1708 bytes)
	I1229 07:24:33.154515  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:24:33.180628  944930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem --> /usr/share/ca-certificates/725078.pem (1338 bytes)
	I1229 07:24:33.200935  944930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:24:33.215557  944930 ssh_runner.go:195] Run: openssl version
	I1229 07:24:33.222469  944930 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.230833  944930 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/725078.pem /etc/ssl/certs/725078.pem
	I1229 07:24:33.238816  944930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.242921  944930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.242995  944930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/725078.pem
	I1229 07:24:33.284737  944930 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:24:33.293366  944930 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/725078.pem /etc/ssl/certs/51391683.0
	I1229 07:24:33.301788  944930 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.310355  944930 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7250782.pem /etc/ssl/certs/7250782.pem
	I1229 07:24:33.318737  944930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.323030  944930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.323096  944930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7250782.pem
	I1229 07:24:33.365678  944930 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:33.374535  944930 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7250782.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:33.383381  944930 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.406146  944930 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:24:33.419003  944930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.425102  944930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.425168  944930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:33.507380  944930 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:24:33.520230  944930 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
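
The openssl x509 -hash -noout calls above print each certificate's subject hash (51391683, 3ec20f2e and b5213941 here), and the <hash>.0 symlinks follow OpenSSL's hashed-directory lookup convention for /etc/ssl/certs. A short Go sketch of one such link (illustrative only):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

    	// openssl prints the subject hash (e.g. b5213941) on stdout.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// OpenSSL's c_rehash convention: <subject-hash>.0 in the certs directory.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
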
	I1229 07:24:33.529069  944930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:24:33.533699  944930 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:24:33.533753  944930 kubeadm.go:401] StartCluster: {Name:force-systemd-env-262325 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-262325 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:33.533870  944930 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:24:33.551612  944930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:24:33.562198  944930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:24:33.570906  944930 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:24:33.570983  944930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:24:33.582835  944930 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:24:33.582858  944930 kubeadm.go:158] found existing configuration files:
	
	I1229 07:24:33.582910  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:24:33.592306  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:24:33.592373  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:24:33.600692  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:24:33.609835  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:24:33.609910  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:24:33.618261  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:24:33.627413  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:24:33.627481  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:24:33.635696  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:24:33.644794  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:24:33.644866  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:24:33.653205  944930 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:24:33.698620  944930 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:24:33.698983  944930 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:24:33.823957  944930 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:24:33.824034  944930 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:24:33.824075  944930 kubeadm.go:319] OS: Linux
	I1229 07:24:33.824135  944930 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:24:33.824190  944930 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:24:33.824242  944930 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:24:33.824294  944930 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:24:33.824354  944930 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:24:33.824417  944930 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:24:33.824470  944930 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:24:33.824522  944930 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:24:33.824574  944930 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:24:33.908773  944930 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:24:33.908888  944930 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:24:33.908984  944930 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:24:33.927744  944930 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:24:33.936038  944930 out.go:252]   - Generating certificates and keys ...
	I1229 07:24:33.936185  944930 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:24:33.936257  944930 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:24:34.036786  944930 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:24:34.214394  944930 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:24:34.820220  944930 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:24:34.908055  944930 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:24:35.039736  944930 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:24:35.039913  944930 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:24:35.131702  944930 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:24:35.132041  944930 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1229 07:24:35.982814  944930 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:24:36.150386  944930 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:24:36.342238  944930 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:24:36.342596  944930 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:24:32.146413  949749 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1229 07:24:32.146645  949749 start.go:159] libmachine.API.Create for "force-systemd-flag-136540" (driver="docker")
	I1229 07:24:32.146676  949749 client.go:173] LocalClient.Create starting
	I1229 07:24:32.146732  949749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem
	I1229 07:24:32.146774  949749 main.go:144] libmachine: Decoding PEM data...
	I1229 07:24:32.146796  949749 main.go:144] libmachine: Parsing certificate...
	I1229 07:24:32.146850  949749 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem
	I1229 07:24:32.146881  949749 main.go:144] libmachine: Decoding PEM data...
	I1229 07:24:32.146896  949749 main.go:144] libmachine: Parsing certificate...
	I1229 07:24:32.147267  949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1229 07:24:32.184241  949749 cli_runner.go:211] docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1229 07:24:32.184329  949749 network_create.go:284] running [docker network inspect force-systemd-flag-136540] to gather additional debugging logs...
	I1229 07:24:32.184347  949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540
	W1229 07:24:32.202472  949749 cli_runner.go:211] docker network inspect force-systemd-flag-136540 returned with exit code 1
	I1229 07:24:32.202500  949749 network_create.go:287] error running [docker network inspect force-systemd-flag-136540]: docker network inspect force-systemd-flag-136540: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-136540 not found
	I1229 07:24:32.202514  949749 network_create.go:289] output of [docker network inspect force-systemd-flag-136540]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-136540 not found
	
	** /stderr **
	I1229 07:24:32.202606  949749 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:24:32.225877  949749 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
	I1229 07:24:32.226204  949749 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e5c59511c8c6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:c4:8e:57:d6:4a} reservation:<nil>}
	I1229 07:24:32.226527  949749 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-857d67da440f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:bc:86:0f:2c:21} reservation:<nil>}
	I1229 07:24:32.226688  949749 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-79307d27fbf3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:05:93:d6:4a:c7} reservation:<nil>}
	I1229 07:24:32.227128  949749 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a58010}
	I1229 07:24:32.227147  949749 network_create.go:124] attempt to create docker network force-systemd-flag-136540 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1229 07:24:32.227210  949749 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-136540 force-systemd-flag-136540
	I1229 07:24:32.293469  949749 network_create.go:108] docker network force-systemd-flag-136540 192.168.85.0/24 created
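
The subnet scan above starts at 192.168.49.0/24 and advances the third octet in steps of 9, skipping every /24 already claimed by an existing bridge until it lands on a free one (192.168.85.0/24 here). A simplified Go sketch of that scan (step size and candidates mirror this log run, not a spec):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Subnets already claimed by docker bridges, per the log above.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    		"192.168.76.0/24": true,
    	}
    	// Candidates start at 192.168.49.0/24 and advance the third octet by 9.
    	for third := 49; third <= 255; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if _, _, err := net.ParseCIDR(cidr); err != nil {
    			continue
    		}
    		if taken[cidr] {
    			fmt.Println("skipping taken subnet", cidr)
    			continue
    		}
    		fmt.Println("using free private subnet", cidr) // 192.168.85.0/24 here
    		return
    	}
    }
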
	I1229 07:24:32.293514  949749 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-136540" container
	I1229 07:24:32.293586  949749 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1229 07:24:32.309969  949749 cli_runner.go:164] Run: docker volume create force-systemd-flag-136540 --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --label created_by.minikube.sigs.k8s.io=true
	I1229 07:24:32.342891  949749 oci.go:103] Successfully created a docker volume force-systemd-flag-136540
	I1229 07:24:32.343001  949749 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-136540-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --entrypoint /usr/bin/test -v force-systemd-flag-136540:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -d /var/lib
	I1229 07:24:32.956540  949749 oci.go:107] Successfully prepared a docker volume force-systemd-flag-136540
	I1229 07:24:32.956596  949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:32.956607  949749 kic.go:194] Starting extracting preloaded images to volume ...
	I1229 07:24:32.956681  949749 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-136540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir
	I1229 07:24:36.453768  949749 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-136540:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 -I lz4 -xf /preloaded.tar -C /extractDir: (3.49703133s)
	I1229 07:24:36.453806  949749 kic.go:203] duration metric: took 3.497195297s to extract preloaded images to volume ...
	W1229 07:24:36.453940  949749 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1229 07:24:36.454069  949749 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1229 07:24:36.553908  949749 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-136540 --name force-systemd-flag-136540 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-136540 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-136540 --network force-systemd-flag-136540 --ip 192.168.85.2 --volume force-systemd-flag-136540:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409
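
Note the --publish=127.0.0.1::22 style flags in the docker run above: Docker picks ephemeral host ports, so later steps must inspect the container to learn where SSH actually landed (port 33762 below). A sketch of that recovery in Go, reusing the same inspect template the log shows (container name is specific to this run):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same Go template the provisioner passes to `docker container inspect -f`.
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
    		"force-systemd-flag-136540").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ssh is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
    }
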
	I1229 07:24:36.535398  944930 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:24:36.920847  944930 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:24:37.020449  944930 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:24:37.199538  944930 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:24:37.864933  944930 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:24:37.865043  944930 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:24:37.866599  944930 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:24:37.871433  944930 out.go:252]   - Booting up control plane ...
	I1229 07:24:37.871535  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:24:37.871619  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:24:37.872637  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:24:37.889844  944930 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:24:37.890657  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:24:37.898413  944930 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:24:37.899217  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:24:37.899595  944930 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:24:38.054089  944930 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:24:38.054212  944930 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:24:36.921885  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Running}}
	I1229 07:24:36.949531  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
	I1229 07:24:36.977208  949749 cli_runner.go:164] Run: docker exec force-systemd-flag-136540 stat /var/lib/dpkg/alternatives/iptables
	I1229 07:24:37.043401  949749 oci.go:144] the created container "force-systemd-flag-136540" has a running status.
	I1229 07:24:37.043446  949749 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa...
	I1229 07:24:37.613435  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1229 07:24:37.613488  949749 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1229 07:24:37.645753  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
	I1229 07:24:37.677430  949749 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1229 07:24:37.677450  949749 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-136540 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1229 07:24:37.757532  949749 cli_runner.go:164] Run: docker container inspect force-systemd-flag-136540 --format={{.State.Status}}
	I1229 07:24:37.783838  949749 machine.go:94] provisionDockerMachine start ...
	I1229 07:24:37.783940  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:37.816369  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:37.816708  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:37.816718  949749 main.go:144] libmachine: About to run SSH command:
	hostname
	I1229 07:24:37.817297  949749 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48280->127.0.0.1:33762: read: connection reset by peer
	I1229 07:24:40.967978  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-136540
	
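
The first dial above fails with "connection reset by peer" because sshd inside the fresh container is still starting; the provisioner simply retries until the handshake succeeds a few seconds later. A minimal retry loop as a Go sketch (assumes golang.org/x/crypto/ssh; key setup elided, port taken from this run):

    package main

    import (
    	"log"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ /* e.g. ssh.PublicKeys(signer) */ },
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
    		Timeout:         5 * time.Second,
    	}
    	// Early dials can fail with "connection reset by peer" while sshd starts;
    	// retry with a short backoff instead of treating it as fatal.
    	for attempt := 1; attempt <= 10; attempt++ {
    		client, err := ssh.Dial("tcp", "127.0.0.1:33762", cfg)
    		if err == nil {
    			defer client.Close()
    			log.Println("ssh ready after", attempt, "attempt(s)")
    			return
    		}
    		log.Println("dial failed, retrying:", err)
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("ssh never became ready")
    }
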
	I1229 07:24:40.968004  949749 ubuntu.go:182] provisioning hostname "force-systemd-flag-136540"
	I1229 07:24:40.968074  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:40.986787  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:40.987162  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:40.987185  949749 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-136540 && echo "force-systemd-flag-136540" | sudo tee /etc/hostname
	I1229 07:24:41.155636  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-136540
	
	I1229 07:24:41.155724  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.177733  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:41.178031  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:41.178048  949749 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-136540' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-136540/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-136540' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1229 07:24:41.332316  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1229 07:24:41.332339  949749 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22353-723215/.minikube CaCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22353-723215/.minikube}
	I1229 07:24:41.332371  949749 ubuntu.go:190] setting up certificates
	I1229 07:24:41.332381  949749 provision.go:84] configureAuth start
	I1229 07:24:41.332439  949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
	I1229 07:24:41.349068  949749 provision.go:143] copyHostCerts
	I1229 07:24:41.349109  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:24:41.349165  949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem, removing ...
	I1229 07:24:41.349180  949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem
	I1229 07:24:41.349258  949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/ca.pem (1082 bytes)
	I1229 07:24:41.349344  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:24:41.349367  949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem, removing ...
	I1229 07:24:41.349374  949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem
	I1229 07:24:41.349400  949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/cert.pem (1123 bytes)
	I1229 07:24:41.349453  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:24:41.349475  949749 exec_runner.go:144] found /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem, removing ...
	I1229 07:24:41.349480  949749 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem
	I1229 07:24:41.349511  949749 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22353-723215/.minikube/key.pem (1675 bytes)
	I1229 07:24:41.349577  949749 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-136540 san=[127.0.0.1 192.168.85.2 force-systemd-flag-136540 localhost minikube]
	I1229 07:24:41.546735  949749 provision.go:177] copyRemoteCerts
	I1229 07:24:41.546817  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1229 07:24:41.546861  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.566148  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:41.671926  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1229 07:24:41.672027  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1229 07:24:41.689940  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1229 07:24:41.690004  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1229 07:24:41.707708  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1229 07:24:41.707770  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1229 07:24:41.725505  949749 provision.go:87] duration metric: took 393.100381ms to configureAuth
	I1229 07:24:41.725531  949749 ubuntu.go:206] setting minikube options for container-runtime
	I1229 07:24:41.725728  949749 config.go:182] Loaded profile config "force-systemd-flag-136540": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:24:41.725782  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.743373  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:41.743703  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:41.743713  949749 main.go:144] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1229 07:24:41.897630  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1229 07:24:41.897708  949749 ubuntu.go:71] root file system type: overlay
	I1229 07:24:41.897848  949749 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1229 07:24:41.897935  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:41.921519  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:41.921836  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:41.921950  949749 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1229 07:24:42.102668  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1229 07:24:42.102864  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:42.133688  949749 main.go:144] libmachine: Using SSH client type: native
	I1229 07:24:42.134051  949749 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dbbe0] 0x3de0e0 <nil>  [] 0s} 127.0.0.1 33762 <nil> <nil>}
	I1229 07:24:42.134080  949749 main.go:144] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1229 07:24:43.163247  949749 main.go:144] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:49:02.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-29 07:24:42.093571384 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
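
The `diff -u ... || { mv ...; }` idiom above makes unit installation idempotent: the staged docker.service.new only replaces the live unit, and docker is only reloaded, re-enabled and restarted, when the rendered content actually differs (diff exits non-zero). The same guard as a Go sketch (illustrative, not minikube's code):

    package main

    import (
    	"bytes"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit, staged = "/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"

    	oldData, _ := os.ReadFile(unit) // a missing unit just counts as "changed"
    	newData, err := os.ReadFile(staged)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if bytes.Equal(oldData, newData) {
    		return // nothing to do; docker keeps running undisturbed
    	}
    	if err := os.Rename(staged, unit); err != nil {
    		log.Fatal(err)
    	}
    	// Same sequence the shell one-liner runs after the mv.
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			log.Fatalf("%v: %v\n%s", args, err, out)
    		}
    	}
    }
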
	I1229 07:24:43.163281  949749 machine.go:97] duration metric: took 5.379421515s to provisionDockerMachine
	I1229 07:24:43.163293  949749 client.go:176] duration metric: took 11.016607482s to LocalClient.Create
	I1229 07:24:43.163321  949749 start.go:167] duration metric: took 11.016676896s to libmachine.API.Create "force-systemd-flag-136540"
	I1229 07:24:43.163335  949749 start.go:293] postStartSetup for "force-systemd-flag-136540" (driver="docker")
	I1229 07:24:43.163345  949749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1229 07:24:43.163421  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1229 07:24:43.163475  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.181417  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.288488  949749 ssh_runner.go:195] Run: cat /etc/os-release
	I1229 07:24:43.291782  949749 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1229 07:24:43.291809  949749 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1229 07:24:43.291822  949749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/addons for local assets ...
	I1229 07:24:43.291880  949749 filesync.go:126] Scanning /home/jenkins/minikube-integration/22353-723215/.minikube/files for local assets ...
	I1229 07:24:43.291954  949749 filesync.go:149] local asset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> 7250782.pem in /etc/ssl/certs
	I1229 07:24:43.291962  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /etc/ssl/certs/7250782.pem
	I1229 07:24:43.292057  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1229 07:24:43.299384  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:43.317036  949749 start.go:296] duration metric: took 153.684905ms for postStartSetup
	I1229 07:24:43.317451  949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
	I1229 07:24:43.335322  949749 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/config.json ...
	I1229 07:24:43.335607  949749 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:24:43.335663  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.354609  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.461171  949749 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1229 07:24:43.466044  949749 start.go:128] duration metric: took 11.323009959s to createHost
	I1229 07:24:43.466091  949749 start.go:83] releasing machines lock for "force-systemd-flag-136540", held for 11.323168174s
	I1229 07:24:43.466184  949749 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-136540
	I1229 07:24:43.483271  949749 ssh_runner.go:195] Run: cat /version.json
	I1229 07:24:43.483331  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.483583  949749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1229 07:24:43.483648  949749 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-136540
	I1229 07:24:43.504986  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.516239  949749 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33762 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/force-systemd-flag-136540/id_rsa Username:docker}
	I1229 07:24:43.695447  949749 ssh_runner.go:195] Run: systemctl --version
	I1229 07:24:43.701895  949749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1229 07:24:43.706075  949749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1229 07:24:43.706145  949749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1229 07:24:43.733426  949749 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
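Note: the find invocation above renames any bridge/podman CNI configs to *.mk_disabled so they stop being loaded; re-enabling one is just the reverse rename, e.g. for the file named in this run:

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist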
	I1229 07:24:43.733449  949749 start.go:496] detecting cgroup driver to use...
	I1229 07:24:43.733462  949749 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:24:43.733554  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:24:43.747390  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1229 07:24:43.755747  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1229 07:24:43.764296  949749 containerd.go:147] configuring containerd to use "systemd" as cgroup driver...
	I1229 07:24:43.764426  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1229 07:24:43.773285  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:24:43.782062  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1229 07:24:43.790627  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1229 07:24:43.799083  949749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1229 07:24:43.806872  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1229 07:24:43.815660  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1229 07:24:43.824501  949749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1229 07:24:43.833359  949749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1229 07:24:43.840707  949749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1229 07:24:43.847859  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:43.958912  949749 ssh_runner.go:195] Run: sudo systemctl restart containerd
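Note: the sed edits above rewrite /etc/containerd/config.toml in place before this restart. Under containerd 1.x conventions the touched fragment afterwards looks roughly like the sketch below (exact section nesting on the node may differ; only keys edited above are shown):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true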
	I1229 07:24:44.059070  949749 start.go:496] detecting cgroup driver to use...
	I1229 07:24:44.059146  949749 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1229 07:24:44.059226  949749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1229 07:24:44.075065  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:24:44.088639  949749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1229 07:24:44.122930  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1229 07:24:44.137375  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1229 07:24:44.155656  949749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1229 07:24:44.175473  949749 ssh_runner.go:195] Run: which cri-dockerd
	I1229 07:24:44.180371  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1229 07:24:44.190423  949749 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1229 07:24:44.205661  949749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1229 07:24:44.321544  949749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1229 07:24:44.440345  949749 docker.go:578] configuring docker to use "systemd" as cgroup driver...
	I1229 07:24:44.440462  949749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
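Note: the 129-byte payload written to /etc/docker/daemon.json is not echoed in the log; since this step forces the "systemd" cgroup driver, it is presumably of this shape (an assumption, not a capture from this run):

    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }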
	I1229 07:24:44.454047  949749 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1229 07:24:44.466753  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:44.579909  949749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1229 07:24:44.997772  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1229 07:24:45.025871  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1229 07:24:45.048256  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:24:45.067946  949749 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1229 07:24:45.246433  949749 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1229 07:24:45.394951  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:45.519551  949749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1229 07:24:45.535811  949749 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1229 07:24:45.548627  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:45.673698  949749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1229 07:24:45.747485  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1229 07:24:45.762101  949749 start.go:553] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1229 07:24:45.762224  949749 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1229 07:24:45.765988  949749 start.go:574] Will wait 60s for crictl version
	I1229 07:24:45.766089  949749 ssh_runner.go:195] Run: which crictl
	I1229 07:24:45.769514  949749 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1229 07:24:45.795220  949749 start.go:590] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1229 07:24:45.795343  949749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:24:45.817012  949749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1229 07:24:45.845183  949749 out.go:252] * Preparing Kubernetes v1.35.0 on Docker 29.1.3 ...
	I1229 07:24:45.845304  949749 cli_runner.go:164] Run: docker network inspect force-systemd-flag-136540 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1229 07:24:45.862014  949749 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1229 07:24:45.865896  949749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:24:45.875964  949749 kubeadm.go:884] updating cluster {Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I1229 07:24:45.876083  949749 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
	I1229 07:24:45.876188  949749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:24:45.893986  949749 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 07:24:45.894009  949749 docker.go:624] Images already preloaded, skipping extraction
	I1229 07:24:45.894075  949749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1229 07:24:45.911802  949749 docker.go:694] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0
	registry.k8s.io/kube-scheduler:v1.35.0
	registry.k8s.io/kube-controller-manager:v1.35.0
	registry.k8s.io/kube-proxy:v1.35.0
	registry.k8s.io/etcd:3.6.6-0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1229 07:24:45.911829  949749 cache_images.go:86] Images are preloaded, skipping loading
	I1229 07:24:45.911839  949749 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 docker true true} ...
	I1229 07:24:45.911933  949749 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=force-systemd-flag-136540 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
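Note: as in the dockerd drop-in earlier, the bare ExecStart= clears the packaged command before the replacement is set. The 324-byte file scp'd below lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and, trimmed to its essentials, follows this pattern (flags abbreviated from the ExecStart shown above):

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2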
	I1229 07:24:45.912006  949749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1229 07:24:45.963834  949749 cni.go:84] Creating CNI manager for ""
	I1229 07:24:45.963864  949749 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 07:24:45.963922  949749 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1229 07:24:45.963952  949749 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-136540 NodeName:force-systemd-flag-136540 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1229 07:24:45.964163  949749 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "force-systemd-flag-136540"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
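Note: the generated file could be sanity-checked offline before the init attempt below; recent kubeadm releases ship a validator for exactly this (a sketch, assuming the subcommand is available in the v1.35.0 binary):

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml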
	
	I1229 07:24:45.964261  949749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I1229 07:24:45.972065  949749 binaries.go:51] Found k8s binaries, skipping transfer
	I1229 07:24:45.972197  949749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1229 07:24:45.979844  949749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1229 07:24:45.992556  949749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1229 07:24:46.006552  949749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1229 07:24:46.020398  949749 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1229 07:24:46.024230  949749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1229 07:24:46.035368  949749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1229 07:24:46.163494  949749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1229 07:24:46.184599  949749 certs.go:69] Setting up /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540 for IP: 192.168.85.2
	I1229 07:24:46.184618  949749 certs.go:195] generating shared ca certs ...
	I1229 07:24:46.184635  949749 certs.go:227] acquiring lock for ca certs: {Name:mk9c2ed6b225eba3a3b373f488351467f747c9ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.184776  949749 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key
	I1229 07:24:46.184825  949749 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key
	I1229 07:24:46.184837  949749 certs.go:257] generating profile certs ...
	I1229 07:24:46.184891  949749 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key
	I1229 07:24:46.184906  949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt with IP's: []
	I1229 07:24:46.406421  949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt ...
	I1229 07:24:46.406498  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.crt: {Name:mkeabcc81e93cc9bab177300f214aee09ffb34da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.406748  949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key ...
	I1229 07:24:46.406796  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/client.key: {Name:mk1d3be86290b8aa5c0871eada27f23610866e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.406948  949749 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c
	I1229 07:24:46.407005  949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1229 07:24:46.644365  949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c ...
	I1229 07:24:46.644395  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c: {Name:mk20477dd3211295249f0fd8db3287c9ced07fcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.644644  949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c ...
	I1229 07:24:46.644661  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c: {Name:mk90a993a5735e7ecab2e7be38b0b8ea44299fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:46.644750  949749 certs.go:382] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt.58f5d73c -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt
	I1229 07:24:46.644835  949749 certs.go:386] copying /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key.58f5d73c -> /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key
	I1229 07:24:46.644897  949749 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key
	I1229 07:24:46.644913  949749 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt with IP's: []
	I1229 07:24:47.026929  949749 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt ...
	I1229 07:24:47.026978  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt: {Name:mk152d5d3beadbce81174a15f580235a4bfefeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:47.027179  949749 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key ...
	I1229 07:24:47.027195  949749 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key: {Name:mkd3178fa5a3e305677094e64826570746f84993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 07:24:47.027366  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1229 07:24:47.027396  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1229 07:24:47.027413  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1229 07:24:47.027428  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1229 07:24:47.027440  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1229 07:24:47.027462  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1229 07:24:47.027478  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1229 07:24:47.027488  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1229 07:24:47.027539  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem (1338 bytes)
	W1229 07:24:47.027580  949749 certs.go:480] ignoring /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078_empty.pem, impossibly tiny 0 bytes
	I1229 07:24:47.027593  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca-key.pem (1675 bytes)
	I1229 07:24:47.027622  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/ca.pem (1082 bytes)
	I1229 07:24:47.027655  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/cert.pem (1123 bytes)
	I1229 07:24:47.027688  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/key.pem (1675 bytes)
	I1229 07:24:47.027736  949749 certs.go:484] found cert: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem (1708 bytes)
	I1229 07:24:47.027771  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem -> /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.027789  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.027800  949749 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem -> /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.028420  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1229 07:24:47.047819  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1229 07:24:47.066416  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1229 07:24:47.083760  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1229 07:24:47.100871  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1229 07:24:47.118300  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1229 07:24:47.135827  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1229 07:24:47.154223  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/force-systemd-flag-136540/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1229 07:24:47.171152  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/ssl/certs/7250782.pem --> /usr/share/ca-certificates/7250782.pem (1708 bytes)
	I1229 07:24:47.188424  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1229 07:24:47.204881  949749 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22353-723215/.minikube/certs/725078.pem --> /usr/share/ca-certificates/725078.pem (1338 bytes)
	I1229 07:24:47.222920  949749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1229 07:24:47.236010  949749 ssh_runner.go:195] Run: openssl version
	I1229 07:24:47.242847  949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.250549  949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7250782.pem /etc/ssl/certs/7250782.pem
	I1229 07:24:47.257970  949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.261605  949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 29 06:52 /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.261667  949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7250782.pem
	I1229 07:24:47.303672  949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:47.311437  949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/7250782.pem /etc/ssl/certs/3ec20f2e.0
	I1229 07:24:47.319608  949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.327019  949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1229 07:24:47.334490  949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.338076  949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 29 06:47 /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.338184  949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1229 07:24:47.381190  949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1229 07:24:47.388743  949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1229 07:24:47.395955  949749 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.403397  949749 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/725078.pem /etc/ssl/certs/725078.pem
	I1229 07:24:47.410817  949749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.414638  949749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 29 06:52 /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.414707  949749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/725078.pem
	I1229 07:24:47.458494  949749 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1229 07:24:47.465936  949749 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/725078.pem /etc/ssl/certs/51391683.0
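Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA under /etc/ssl/certs is reachable via a symlink named <subject-hash>.0. One of this run's links, reproduced by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"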
	I1229 07:24:47.473134  949749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1229 07:24:47.476718  949749 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1229 07:24:47.476770  949749 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-136540 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-136540 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 07:24:47.476884  949749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1229 07:24:47.493620  949749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1229 07:24:47.502107  949749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1229 07:24:47.509981  949749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:24:47.510046  949749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:24:47.517804  949749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:24:47.517825  949749 kubeadm.go:158] found existing configuration files:
	
	I1229 07:24:47.517877  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:24:47.525590  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:24:47.525674  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:24:47.532930  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:24:47.540396  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:24:47.540486  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:24:47.547676  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:24:47.555165  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:24:47.555256  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:24:47.562475  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:24:47.570046  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:24:47.570109  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:24:47.577347  949749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:24:47.617344  949749 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:24:47.617407  949749 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:24:47.711675  949749 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:24:47.711830  949749 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:24:47.711890  949749 kubeadm.go:319] OS: Linux
	I1229 07:24:47.711974  949749 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:24:47.712056  949749 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:24:47.712162  949749 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:24:47.712241  949749 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:24:47.712321  949749 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:24:47.712401  949749 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:24:47.712480  949749 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:24:47.712559  949749 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:24:47.712639  949749 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:24:47.783238  949749 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:24:47.783386  949749 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:24:47.783503  949749 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:24:47.800559  949749 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:24:47.807021  949749 out.go:252]   - Generating certificates and keys ...
	I1229 07:24:47.807150  949749 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:24:47.807244  949749 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:24:48.391180  949749 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1229 07:24:48.594026  949749 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1229 07:24:48.825994  949749 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1229 07:24:49.323806  949749 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1229 07:24:49.506950  949749 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1229 07:24:49.507188  949749 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:24:49.719847  949749 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1229 07:24:49.720093  949749 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1229 07:24:50.129385  949749 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1229 07:24:50.272350  949749 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1229 07:24:50.704674  949749 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1229 07:24:50.705019  949749 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:24:51.089352  949749 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:24:51.167795  949749 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:24:51.380140  949749 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:24:51.696561  949749 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:24:51.802016  949749 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:24:51.802726  949749 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:24:51.805447  949749 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:24:51.809325  949749 out.go:252]   - Booting up control plane ...
	I1229 07:24:51.809441  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:24:51.809530  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:24:51.809609  949749 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:24:51.825390  949749 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:24:51.825876  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:24:51.840218  949749 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:24:51.840883  949749 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:24:51.841100  949749 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:24:51.986925  949749 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:24:51.987097  949749 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:28:38.051085  944930 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001227018s
	I1229 07:28:38.051157  944930 kubeadm.go:319] 
	I1229 07:28:38.051223  944930 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:28:38.051300  944930 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:28:38.051410  944930 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:28:38.051416  944930 kubeadm.go:319] 
	I1229 07:28:38.051527  944930 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:28:38.051563  944930 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:28:38.051605  944930 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:28:38.051610  944930 kubeadm.go:319] 
	I1229 07:28:38.058849  944930 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:28:38.059443  944930 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:28:38.059598  944930 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:28:38.059869  944930 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:28:38.059880  944930 kubeadm.go:319] 
	I1229 07:28:38.059960  944930 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:28:38.060093  944930 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-262325 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001227018s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
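Note: when the 10248 health check times out as above, the two suggested commands are the quickest way in, together with a check for the cgroup-driver mismatch that --force-systemd runs are sensitive to (all standard systemctl/journalctl/docker usage; run on the failing node):

    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    docker info --format '{{.CgroupDriver}}'             # should print "systemd" on this node
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml  # kubelet side, written by kubeadm above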
	
	I1229 07:28:38.060199  944930 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:28:38.479926  944930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:28:38.493494  944930 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:28:38.493596  944930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:28:38.502170  944930 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:28:38.502188  944930 kubeadm.go:158] found existing configuration files:
	
	I1229 07:28:38.502240  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:28:38.510027  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:28:38.510099  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:28:38.517614  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:28:38.525160  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:28:38.525235  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:28:38.532408  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:28:38.539798  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:28:38.539863  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:28:38.547721  944930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:28:38.556239  944930 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:28:38.556317  944930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:28:38.563562  944930 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:28:38.600473  944930 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I1229 07:28:38.600536  944930 kubeadm.go:319] [preflight] Running pre-flight checks
	I1229 07:28:38.697404  944930 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1229 07:28:38.697480  944930 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1229 07:28:38.697521  944930 kubeadm.go:319] OS: Linux
	I1229 07:28:38.697573  944930 kubeadm.go:319] CGROUPS_CPU: enabled
	I1229 07:28:38.697627  944930 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1229 07:28:38.697678  944930 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1229 07:28:38.697731  944930 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1229 07:28:38.697782  944930 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1229 07:28:38.697834  944930 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1229 07:28:38.697884  944930 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1229 07:28:38.697936  944930 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1229 07:28:38.697985  944930 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1229 07:28:38.775235  944930 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1229 07:28:38.775358  944930 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1229 07:28:38.775462  944930 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1229 07:28:38.787352  944930 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1229 07:28:38.793014  944930 out.go:252]   - Generating certificates and keys ...
	I1229 07:28:38.793177  944930 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1229 07:28:38.793288  944930 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1229 07:28:38.793407  944930 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1229 07:28:38.793528  944930 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1229 07:28:38.793634  944930 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1229 07:28:38.793741  944930 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1229 07:28:38.793846  944930 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1229 07:28:38.793963  944930 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1229 07:28:38.794050  944930 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1229 07:28:38.794131  944930 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1229 07:28:38.794181  944930 kubeadm.go:319] [certs] Using the existing "sa" key
	I1229 07:28:38.794240  944930 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1229 07:28:38.945013  944930 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1229 07:28:39.365721  944930 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1229 07:28:39.923785  944930 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1229 07:28:40.126540  944930 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1229 07:28:40.369067  944930 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1229 07:28:40.369710  944930 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1229 07:28:40.372351  944930 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1229 07:28:40.375721  944930 out.go:252]   - Booting up control plane ...
	I1229 07:28:40.375827  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1229 07:28:40.375917  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1229 07:28:40.375990  944930 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1229 07:28:40.395241  944930 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1229 07:28:40.395355  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1229 07:28:40.403534  944930 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1229 07:28:40.406687  944930 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1229 07:28:40.406741  944930 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1229 07:28:40.530422  944930 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1229 07:28:40.530543  944930 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1229 07:28:51.986578  949749 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00003953s
	I1229 07:28:51.986615  949749 kubeadm.go:319] 
	I1229 07:28:51.986711  949749 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:28:51.986761  949749 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:28:51.986866  949749 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:28:51.986874  949749 kubeadm.go:319] 
	I1229 07:28:51.986980  949749 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:28:51.987012  949749 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:28:51.987044  949749 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:28:51.987048  949749 kubeadm.go:319] 
	I1229 07:28:51.991310  949749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:28:51.991737  949749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:28:51.991851  949749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:28:51.992128  949749 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1229 07:28:51.992138  949749 kubeadm.go:319] 
	I1229 07:28:51.992206  949749 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1229 07:28:51.992360  949749 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-136540 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00003953s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1229 07:28:51.992440  949749 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1229 07:28:52.418971  949749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:28:52.431883  949749 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1229 07:28:52.431947  949749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1229 07:28:52.439564  949749 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1229 07:28:52.439582  949749 kubeadm.go:158] found existing configuration files:
	
	I1229 07:28:52.439631  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1229 07:28:52.447231  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1229 07:28:52.447294  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1229 07:28:52.454516  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1229 07:28:52.462044  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1229 07:28:52.462110  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1229 07:28:52.469355  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1229 07:28:52.476888  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1229 07:28:52.476953  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1229 07:28:52.484710  949749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1229 07:28:52.492047  949749 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1229 07:28:52.492108  949749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1229 07:28:52.499152  949749 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1229 07:28:52.615409  949749 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:28:52.615841  949749 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:28:52.688523  949749 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:32:40.530696  944930 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000288867s
	I1229 07:32:40.530728  944930 kubeadm.go:319] 
	I1229 07:32:40.530784  944930 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1229 07:32:40.530824  944930 kubeadm.go:319] 	- The kubelet is not running
	I1229 07:32:40.530957  944930 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1229 07:32:40.530973  944930 kubeadm.go:319] 
	I1229 07:32:40.531080  944930 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1229 07:32:40.531114  944930 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1229 07:32:40.531145  944930 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1229 07:32:40.531149  944930 kubeadm.go:319] 
	I1229 07:32:40.535677  944930 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1229 07:32:40.536136  944930 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1229 07:32:40.536249  944930 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1229 07:32:40.536484  944930 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1229 07:32:40.536490  944930 kubeadm.go:319] 
	I1229 07:32:40.536559  944930 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1229 07:32:40.536616  944930 kubeadm.go:403] duration metric: took 8m7.002868565s to StartCluster
	I1229 07:32:40.536662  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1229 07:32:40.536722  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I1229 07:32:40.576172  944930 cri.go:96] found id: ""
	I1229 07:32:40.576210  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.576220  944930 logs.go:284] No container was found matching "kube-apiserver"
	I1229 07:32:40.576226  944930 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1229 07:32:40.576289  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I1229 07:32:40.601170  944930 cri.go:96] found id: ""
	I1229 07:32:40.601196  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.601205  944930 logs.go:284] No container was found matching "etcd"
	I1229 07:32:40.601212  944930 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1229 07:32:40.601271  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I1229 07:32:40.627437  944930 cri.go:96] found id: ""
	I1229 07:32:40.627475  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.627484  944930 logs.go:284] No container was found matching "coredns"
	I1229 07:32:40.627490  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1229 07:32:40.627547  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I1229 07:32:40.653898  944930 cri.go:96] found id: ""
	I1229 07:32:40.653926  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.653935  944930 logs.go:284] No container was found matching "kube-scheduler"
	I1229 07:32:40.653942  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1229 07:32:40.654002  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I1229 07:32:40.679278  944930 cri.go:96] found id: ""
	I1229 07:32:40.679303  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.679312  944930 logs.go:284] No container was found matching "kube-proxy"
	I1229 07:32:40.679320  944930 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1229 07:32:40.679376  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I1229 07:32:40.704607  944930 cri.go:96] found id: ""
	I1229 07:32:40.704633  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.704642  944930 logs.go:284] No container was found matching "kube-controller-manager"
	I1229 07:32:40.704650  944930 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1229 07:32:40.704706  944930 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I1229 07:32:40.729518  944930 cri.go:96] found id: ""
	I1229 07:32:40.729540  944930 logs.go:282] 0 containers: []
	W1229 07:32:40.729548  944930 logs.go:284] No container was found matching "kindnet"
	I1229 07:32:40.729560  944930 logs.go:123] Gathering logs for describe nodes ...
	I1229 07:32:40.729573  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1229 07:32:40.797226  944930 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:32:40.789353    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.789964    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791459    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791927    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.793356    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1229 07:32:40.789353    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.789964    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791459    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.791927    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:40.793356    5473 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1229 07:32:40.797248  944930 logs.go:123] Gathering logs for Docker ...
	I1229 07:32:40.797262  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1229 07:32:40.820013  944930 logs.go:123] Gathering logs for container status ...
	I1229 07:32:40.820050  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1229 07:32:40.847953  944930 logs.go:123] Gathering logs for kubelet ...
	I1229 07:32:40.847981  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1229 07:32:40.905225  944930 logs.go:123] Gathering logs for dmesg ...
	I1229 07:32:40.905262  944930 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1229 07:32:40.923718  944930 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288867s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1229 07:32:40.923764  944930 out.go:285] * 
	W1229 07:32:40.923815  944930 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288867s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:40.923831  944930 out.go:285] * 
	W1229 07:32:40.924083  944930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1229 07:32:40.930857  944930 out.go:203] 
	W1229 07:32:40.934565  944930 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000288867s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1229 07:32:40.934610  944930 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1229 07:32:40.934630  944930 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1229 07:32:40.937710  944930 out.go:203] 
	
	
	==> Docker <==
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.170942798Z" level=info msg="Restoring containers: start."
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.186035854Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.200635959Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.444281220Z" level=info msg="Loading containers: done."
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.461462687Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.461619598Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.461715702Z" level=info msg="Initializing buildkit"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.483691296Z" level=info msg="Completed buildkit initialization"
	Dec 29 07:24:30 force-systemd-env-262325 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.489245426Z" level=info msg="Daemon has completed initialization"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.494173646Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.494289860Z" level=info msg="API listen on /run/docker.sock"
	Dec 29 07:24:30 force-systemd-env-262325 dockerd[1142]: time="2025-12-29T07:24:30.494308526Z" level=info msg="API listen on [::]:2376"
	Dec 29 07:24:31 force-systemd-env-262325 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Start docker client with request timeout 0s"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Loaded network plugin cni"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Setting cgroupDriver systemd"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 29 07:24:31 force-systemd-env-262325 cri-dockerd[1425]: time="2025-12-29T07:24:31Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 29 07:24:31 force-systemd-env-262325 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1229 07:32:42.411467    5624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:42.412391    5624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:42.413989    5624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:42.414650    5624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1229 07:32:42.416288    5624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec29 06:14] hrtimer: interrupt took 41514710 ns
	[Dec29 06:33] kauditd_printk_skb: 8 callbacks suppressed
	[Dec29 06:45] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> kernel <==
	 07:32:42 up  4:15,  0 user,  load average: 0.15, 0.75, 1.73
	Linux force-systemd-env-262325 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 29 07:32:38 force-systemd-env-262325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:39 force-systemd-env-262325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 318.
	Dec 29 07:32:39 force-systemd-env-262325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:39 force-systemd-env-262325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:39 force-systemd-env-262325 kubelet[5400]: E1229 07:32:39.681679    5400 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:39 force-systemd-env-262325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:39 force-systemd-env-262325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:40 force-systemd-env-262325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 29 07:32:40 force-systemd-env-262325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:40 force-systemd-env-262325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:40 force-systemd-env-262325 kubelet[5405]: E1229 07:32:40.435178    5405 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:40 force-systemd-env-262325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:40 force-systemd-env-262325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:41 force-systemd-env-262325 kubelet[5496]: E1229 07:32:41.204875    5496 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 29 07:32:41 force-systemd-env-262325 kubelet[5543]: E1229 07:32:41.954176    5543 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 29 07:32:41 force-systemd-env-262325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
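The kubelet journal at the end of this dump is the actual root cause: kubelet v1.35 validates its configuration on startup and exits immediately on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (restart counter 318-321) and the healthz endpoint that kubeadm polls never comes up. A minimal shell sketch for confirming this from inside the node; the commands are standard, and the expected outputs are inferred from the warnings above:

	# Check the host's cgroup mode: `tmpfs` here means cgroup v1 (rejected by
	# kubelet v1.35 by default), `cgroup2fs` means cgroup v2.
	stat -fc %T /sys/fs/cgroup/
	# The probe kubeadm's wait-control-plane phase performs; "connection refused"
	# matches the failures above because the kubelet never binds port 10248.
	curl -sSL http://127.0.0.1:10248/healthz
	# The troubleshooting commands kubeadm itself suggests:
	systemctl status kubelet
	journalctl -xeu kubelet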
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-262325 -n force-systemd-env-262325
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-262325 -n force-systemd-env-262325: exit status 6 (347.112556ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1229 07:32:42.906934  960074 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-262325" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-262325" apiserver is not running, skipping kubectl commands (state="Stopped")
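The status failure here is a downstream symptom rather than a separate bug: the cluster never started, so the profile's endpoint was never written to the kubeconfig, and `status` exits 6 with a stale-context warning. For a profile whose apiserver is actually up, the warning's own suggestion applies; a minimal sketch using the profile name from this log (in this run it would still report "Stopped"):

	minikube update-context -p force-systemd-env-262325
	minikube status -p force-systemd-env-262325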
helpers_test.go:176: Cleaning up "force-systemd-env-262325" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-262325
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-262325: (1.800016387s)
--- FAIL: TestForceSystemdEnv (508.37s)
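Both failures in this report share the same root cause: a cgroup v1 host (kernel 5.15.0-1084-aws) running kubelet v1.35. The minikube output above already names the relevant knobs. A hedged retry sketch built only from suggestions quoted in this log; nothing beyond those flag values is verified here:

	# Suggestion printed in the failure output above:
	minikube start -p force-systemd-flag-136540 --force-systemd --driver=docker \
	  --container-runtime=docker --extra-config=kubelet.cgroup-driver=systemd
	# Per the kubeadm warning, kubelet v1.35+ on a cgroup v1 host additionally
	# needs the kubelet configuration option 'FailCgroupV1' set to 'false'
	# (option name quoted from the warning; see the KEP link it references).
	# How to pass that option through minikube is not shown in this log, so it
	# is deliberately left unspecified here.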

Test pass (324/352)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 3.12
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 54.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.17
27 TestAddons/Setup 138.61
29 TestAddons/serial/Volcano 41.47
31 TestAddons/serial/GCPAuth/Namespaces 0.22
32 TestAddons/serial/GCPAuth/FakeCredentials 10.95
35 TestAddons/parallel/Registry 15.35
36 TestAddons/parallel/RegistryCreds 0.7
37 TestAddons/parallel/Ingress 19.89
38 TestAddons/parallel/InspektorGadget 10.92
39 TestAddons/parallel/MetricsServer 5.75
41 TestAddons/parallel/CSI 50.93
42 TestAddons/parallel/Headlamp 18
43 TestAddons/parallel/CloudSpanner 5.59
44 TestAddons/parallel/LocalPath 9.65
45 TestAddons/parallel/NvidiaDevicePlugin 5.6
46 TestAddons/parallel/Yakd 11.73
48 TestAddons/StoppedEnableDisable 11.4
49 TestCertOptions 35
50 TestCertExpiration 245.06
51 TestDockerFlags 33.51
58 TestErrorSpam/setup 27.32
59 TestErrorSpam/start 0.78
60 TestErrorSpam/status 1.13
61 TestErrorSpam/pause 1.5
62 TestErrorSpam/unpause 1.84
63 TestErrorSpam/stop 11.57
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 73.54
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 41.85
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.91
75 TestFunctional/serial/CacheCmd/cache/add_local 1.02
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 45.67
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.2
86 TestFunctional/serial/LogsFileCmd 1.24
87 TestFunctional/serial/InvalidService 5.05
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 15.37
91 TestFunctional/parallel/DryRun 0.63
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.1
97 TestFunctional/parallel/ServiceCmdConnect 8.6
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 20.97
101 TestFunctional/parallel/SSHCmd 0.82
102 TestFunctional/parallel/CpCmd 2.66
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.84
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
113 TestFunctional/parallel/License 0.31
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.5
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
125 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
129 TestFunctional/parallel/MountCmd/any-port 9.15
130 TestFunctional/parallel/ServiceCmd/List 0.52
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.4
135 TestFunctional/parallel/MountCmd/specific-port 2.62
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.67
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.18
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
144 TestFunctional/parallel/ImageCommands/Setup 0.59
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.98
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
155 TestFunctional/parallel/DockerEnv/bash 1.04
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 153.09
164 TestMultiControlPlane/serial/DeployApp 8.24
165 TestMultiControlPlane/serial/PingHostFromPods 1.75
166 TestMultiControlPlane/serial/AddWorkerNode 65.16
167 TestMultiControlPlane/serial/NodeLabels 0.14
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.24
169 TestMultiControlPlane/serial/CopyFile 21.74
170 TestMultiControlPlane/serial/StopSecondaryNode 12.03
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
172 TestMultiControlPlane/serial/RestartSecondaryNode 44.4
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 191.56
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.58
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.86
177 TestMultiControlPlane/serial/StopCluster 33.44
178 TestMultiControlPlane/serial/RestartCluster 68.69
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.9
180 TestMultiControlPlane/serial/AddSecondaryNode 55.09
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
184 TestImageBuild/serial/Setup 28.45
185 TestImageBuild/serial/NormalBuild 1.56
186 TestImageBuild/serial/BuildWithBuildArg 0.94
187 TestImageBuild/serial/BuildWithDockerIgnore 0.79
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.94
193 TestJSONOutput/start/Command 67.22
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.7
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.59
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 11.15
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 27.72
219 TestKicCustomNetwork/use_default_bridge_network 30.95
220 TestKicExistingNetwork 30.62
221 TestKicCustomSubnet 30.53
222 TestKicStaticIP 30.24
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 67.32
227 TestMountStart/serial/StartWithMountFirst 10.62
228 TestMountStart/serial/VerifyMountFirst 0.27
229 TestMountStart/serial/StartWithMountSecond 9.92
230 TestMountStart/serial/VerifyMountSecond 0.28
231 TestMountStart/serial/DeleteFirst 1.54
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.28
234 TestMountStart/serial/RestartStopped 10.21
235 TestMountStart/serial/VerifyMountPostStop 0.27
238 TestMultiNode/serial/FreshStart2Nodes 82.26
239 TestMultiNode/serial/DeployApp2Nodes 5.88
240 TestMultiNode/serial/PingHostFrom2Pods 1
241 TestMultiNode/serial/AddNode 34.55
242 TestMultiNode/serial/MultiNodeLabels 0.09
243 TestMultiNode/serial/ProfileList 0.74
244 TestMultiNode/serial/CopyFile 10.71
245 TestMultiNode/serial/StopNode 2.57
246 TestMultiNode/serial/StartAfterStop 9.28
247 TestMultiNode/serial/RestartKeepsNodes 74.53
248 TestMultiNode/serial/DeleteNode 5.76
249 TestMultiNode/serial/StopMultiNode 22.06
250 TestMultiNode/serial/RestartMultiNode 55.26
251 TestMultiNode/serial/ValidateNameConflict 31.92
258 TestScheduledStopUnix 100.79
259 TestSkaffold 134.32
261 TestInsufficientStorage 12.59
262 TestRunningBinaryUpgrade 89.25
264 TestKubernetesUpgrade 361.68
265 TestMissingContainerUpgrade 95.38
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
268 TestNoKubernetes/serial/StartWithK8s 37.41
269 TestNoKubernetes/serial/StartWithStopK8s 12.99
270 TestNoKubernetes/serial/Start 10.16
282 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
284 TestNoKubernetes/serial/ProfileList 0.88
285 TestNoKubernetes/serial/Stop 2.24
286 TestNoKubernetes/serial/StartNoArgs 8.59
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
288 TestStoppedBinaryUpgrade/Setup 0.84
289 TestStoppedBinaryUpgrade/Upgrade 339.87
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
298 TestPreload/Start-NoPreload-PullImage 80.88
299 TestPreload/Restart-With-Preload-Check-User-Image 49.58
302 TestPause/serial/Start 69.97
303 TestPause/serial/SecondStartNoReconfiguration 43.08
304 TestNetworkPlugins/group/auto/Start 75.78
305 TestPause/serial/Pause 0.74
306 TestPause/serial/VerifyStatus 0.39
307 TestPause/serial/Unpause 0.67
308 TestPause/serial/PauseAgain 1.05
309 TestPause/serial/DeletePaused 2.65
310 TestPause/serial/VerifyDeletedResources 0.77
311 TestNetworkPlugins/group/kindnet/Start 51.54
312 TestNetworkPlugins/group/auto/KubeletFlags 0.31
313 TestNetworkPlugins/group/auto/NetCatPod 10.29
314 TestNetworkPlugins/group/auto/DNS 0.31
315 TestNetworkPlugins/group/auto/Localhost 0.23
316 TestNetworkPlugins/group/auto/HairPin 0.23
317 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
319 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
320 TestNetworkPlugins/group/kindnet/DNS 0.29
321 TestNetworkPlugins/group/kindnet/Localhost 0.28
322 TestNetworkPlugins/group/kindnet/HairPin 0.33
323 TestNetworkPlugins/group/calico/Start 69.05
324 TestNetworkPlugins/group/custom-flannel/Start 55.54
325 TestNetworkPlugins/group/calico/ControllerPod 6.01
326 TestNetworkPlugins/group/calico/KubeletFlags 0.44
327 TestNetworkPlugins/group/calico/NetCatPod 10.27
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.47
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.35
330 TestNetworkPlugins/group/calico/DNS 0.22
331 TestNetworkPlugins/group/calico/Localhost 0.17
332 TestNetworkPlugins/group/calico/HairPin 0.16
333 TestNetworkPlugins/group/custom-flannel/DNS 0.2
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
336 TestNetworkPlugins/group/false/Start 75.75
337 TestNetworkPlugins/group/enable-default-cni/Start 71.02
338 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
339 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
340 TestNetworkPlugins/group/false/KubeletFlags 0.38
341 TestNetworkPlugins/group/false/NetCatPod 11.3
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
345 TestNetworkPlugins/group/false/DNS 0.2
346 TestNetworkPlugins/group/false/Localhost 0.17
347 TestNetworkPlugins/group/false/HairPin 0.16
348 TestNetworkPlugins/group/flannel/Start 53.36
349 TestNetworkPlugins/group/bridge/Start 49.1
350 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
351 TestNetworkPlugins/group/bridge/NetCatPod 9.29
352 TestNetworkPlugins/group/flannel/ControllerPod 6.01
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
354 TestNetworkPlugins/group/flannel/NetCatPod 9.26
355 TestNetworkPlugins/group/bridge/DNS 0.24
356 TestNetworkPlugins/group/bridge/Localhost 0.18
357 TestNetworkPlugins/group/bridge/HairPin 0.18
358 TestNetworkPlugins/group/flannel/DNS 0.27
359 TestNetworkPlugins/group/flannel/Localhost 0.23
360 TestNetworkPlugins/group/flannel/HairPin 0.24
361 TestNetworkPlugins/group/kubenet/Start 72.39
363 TestStartStop/group/old-k8s-version/serial/FirstStart 90.86
364 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
365 TestNetworkPlugins/group/kubenet/NetCatPod 10.28
366 TestNetworkPlugins/group/kubenet/DNS 0.21
367 TestNetworkPlugins/group/kubenet/Localhost 0.16
368 TestNetworkPlugins/group/kubenet/HairPin 0.16
369 TestStartStop/group/old-k8s-version/serial/DeployApp 11.5
371 TestStartStop/group/no-preload/serial/FirstStart 77.12
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.03
373 TestStartStop/group/old-k8s-version/serial/Stop 11.62
374 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
375 TestStartStop/group/old-k8s-version/serial/SecondStart 62.51
376 TestStartStop/group/no-preload/serial/DeployApp 10.34
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
379 TestStartStop/group/no-preload/serial/Stop 11.51
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
382 TestStartStop/group/old-k8s-version/serial/Pause 3.83
383 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.46
384 TestStartStop/group/no-preload/serial/SecondStart 54.09
386 TestStartStop/group/embed-certs/serial/FirstStart 75.13
387 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
388 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
389 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
390 TestStartStop/group/no-preload/serial/Pause 3.08
392 TestStartStop/group/newest-cni/serial/FirstStart 36.34
393 TestStartStop/group/embed-certs/serial/DeployApp 9.39
394 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.45
395 TestStartStop/group/embed-certs/serial/Stop 11.66
396 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
397 TestStartStop/group/embed-certs/serial/SecondStart 56.67
398 TestStartStop/group/newest-cni/serial/DeployApp 0
399 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
400 TestStartStop/group/newest-cni/serial/Stop 11.42
401 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
402 TestStartStop/group/newest-cni/serial/SecondStart 17.57
403 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
405 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
406 TestStartStop/group/newest-cni/serial/Pause 3.29
408 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.54
409 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
411 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
412 TestStartStop/group/embed-certs/serial/Pause 3.83
413 TestPreload/PreloadSrc/gcs 4.4
414 TestPreload/PreloadSrc/github 13.97
415 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
416 TestPreload/PreloadSrc/gcs-cached 0.46
417 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
418 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
419 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
420 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.3
421 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
422 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
423 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
424 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.04
TestDownloadOnly/v1.28.0/json-events (6.36s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-383694 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-383694 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.359670087s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.36s)
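
Note: the -o=json output exercised by this test is a stream of CloudEvents-style JSON objects, one per line. A minimal sketch of filtering that stream (jq and the "demo" profile name are assumptions here, not used by the test):

    # Print the human-readable message carried by each JSON event
    out/minikube-linux-arm64 start -o=json --download-only -p demo --driver=docker --container-runtime=docker \
      | jq -r '.data.message? // empty'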

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1229 06:46:21.734819  725078 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1229 06:46:21.734904  725078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
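
Note: the preload-exists check above boils down to the cached tarball being present on disk; a hand check is just (path taken from this run's log):

    ls -lh /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4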

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-383694
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-383694: exit status 85 (90.802361ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-383694 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-383694 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:15.420618  725084 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:15.421020  725084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:15.421059  725084 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:15.421080  725084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:15.421529  725084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	W1229 06:46:15.421787  725084 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22353-723215/.minikube/config/config.json: open /home/jenkins/minikube-integration/22353-723215/.minikube/config/config.json: no such file or directory
	I1229 06:46:15.422264  725084 out.go:368] Setting JSON to true
	I1229 06:46:15.423122  725084 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12525,"bootTime":1766978251,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 06:46:15.423262  725084 start.go:143] virtualization:  
	I1229 06:46:15.429325  725084 out.go:99] [download-only-383694] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1229 06:46:15.429591  725084 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball: no such file or directory
	I1229 06:46:15.429629  725084 notify.go:221] Checking for updates...
	I1229 06:46:15.432842  725084 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:15.436304  725084 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:15.439497  725084 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 06:46:15.442636  725084 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 06:46:15.445769  725084 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1229 06:46:15.451504  725084 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:46:15.451801  725084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:15.484028  725084 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:46:15.484136  725084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:15.538831  725084 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 06:46:15.529895346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:15.538931  725084 docker.go:319] overlay module found
	I1229 06:46:15.542006  725084 out.go:99] Using the docker driver based on user configuration
	I1229 06:46:15.542047  725084 start.go:309] selected driver: docker
	I1229 06:46:15.542054  725084 start.go:928] validating driver "docker" against <nil>
	I1229 06:46:15.542160  725084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:15.604003  725084 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-29 06:46:15.594507183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:15.604250  725084 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:15.604534  725084 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1229 06:46:15.604684  725084 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:46:15.607853  725084 out.go:171] Using Docker driver with root privileges
	I1229 06:46:15.610802  725084 cni.go:84] Creating CNI manager for ""
	I1229 06:46:15.610872  725084 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1229 06:46:15.610887  725084 start_flags.go:342] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1229 06:46:15.610962  725084 start.go:353] cluster config:
	{Name:download-only-383694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-383694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:46:15.613924  725084 out.go:99] Starting "download-only-383694" primary control-plane node in "download-only-383694" cluster
	I1229 06:46:15.613946  725084 cache.go:134] Beginning downloading kic base image for docker with docker
	I1229 06:46:15.616882  725084 out.go:99] Pulling base image v0.0.48-1766979815-22353 ...
	I1229 06:46:15.616921  725084 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1229 06:46:15.617071  725084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local docker daemon
	I1229 06:46:15.632515  725084 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:46:15.632688  725084 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 in local cache directory
	I1229 06:46:15.632787  725084 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 to local cache
	I1229 06:46:15.667114  725084 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1229 06:46:15.667151  725084 cache.go:65] Caching tarball of preloaded images
	I1229 06:46:15.667321  725084 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1229 06:46:15.670634  725084 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1229 06:46:15.670666  725084 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1229 06:46:15.670675  725084 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 from gcs api...
	I1229 06:46:15.751401  725084 preload.go:313] Got checksum from GCS API "002a73d62a3b066a08573cf3da2c8cb4"
	I1229 06:46:15.751534  725084 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I1229 06:46:18.605846  725084 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1229 06:46:18.606521  725084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/download-only-383694/config.json ...
	I1229 06:46:18.606565  725084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/download-only-383694/config.json: {Name:mk51f9b35d9f724a4f5c49c1a876eff891107d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1229 06:46:18.606781  725084 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1229 06:46:18.607022  725084 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22353-723215/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-383694 host does not exist
	  To start a cluster, run: "minikube start -p download-only-383694"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
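
Note: the preload download in the log above is checksum-guarded (the GCS API returned md5 002a73d62a3b066a08573cf3da2c8cb4). A sketch of re-verifying the cached tarball by hand, using that checksum value from this run:

    cd /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball
    # md5sum -c expects "checksum  filename" pairs on stdin
    echo "002a73d62a3b066a08573cf3da2c8cb4  preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4" | md5sum -c -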

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-383694
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (3.12s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-677553 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-677553 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.123108233s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (3.12s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I1229 06:46:25.297151  725078 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime docker
I1229 06:46:25.297185  725078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-677553
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-677553: exit status 85 (93.981921ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-383694 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-383694 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ delete  │ -p download-only-383694                                                                                                                                                       │ download-only-383694 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │ 29 Dec 25 06:46 UTC │
	│ start   │ -o=json --download-only -p download-only-677553 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-677553 │ jenkins │ v1.37.0 │ 29 Dec 25 06:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/29 06:46:22
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1229 06:46:22.214224  725286 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:46:22.214332  725286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:22.214343  725286 out.go:374] Setting ErrFile to fd 2...
	I1229 06:46:22.214349  725286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:46:22.214594  725286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 06:46:22.214981  725286 out.go:368] Setting JSON to true
	I1229 06:46:22.215761  725286 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12532,"bootTime":1766978251,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 06:46:22.215829  725286 start.go:143] virtualization:  
	I1229 06:46:22.219122  725286 out.go:99] [download-only-677553] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 06:46:22.219333  725286 notify.go:221] Checking for updates...
	I1229 06:46:22.222184  725286 out.go:171] MINIKUBE_LOCATION=22353
	I1229 06:46:22.225165  725286 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:46:22.228180  725286 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 06:46:22.231044  725286 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 06:46:22.233895  725286 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1229 06:46:22.239512  725286 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1229 06:46:22.239821  725286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:46:22.276418  725286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:46:22.276526  725286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:22.328863  725286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-29 06:46:22.319710346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:22.328978  725286 docker.go:319] overlay module found
	I1229 06:46:22.331952  725286 out.go:99] Using the docker driver based on user configuration
	I1229 06:46:22.331994  725286 start.go:309] selected driver: docker
	I1229 06:46:22.332002  725286 start.go:928] validating driver "docker" against <nil>
	I1229 06:46:22.332102  725286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:46:22.386621  725286 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-29 06:46:22.377074418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:46:22.386769  725286 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I1229 06:46:22.387038  725286 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1229 06:46:22.387203  725286 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I1229 06:46:22.390384  725286 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-677553 host does not exist
	  To start a cluster, run: "minikube start -p download-only-677553"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-677553
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1229 06:46:26.454423  725078 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-977976 --alsologtostderr --binary-mirror http://127.0.0.1:45761 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-977976" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-977976
--- PASS: TestBinaryMirror (0.66s)
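
Note: --binary-mirror here points minikube at a local HTTP server instead of dl.k8s.io. A hypothetical sketch of standing one up, assuming the mirror reproduces dl.k8s.io's release/<version>/bin/<os>/<arch> layout (that URL shape is what the binary.go line above logs; the "demo" profile and directory names are assumptions):

    mkdir -p mirror/release/v1.35.0/bin/linux/arm64
    curl -Lo mirror/release/v1.35.0/bin/linux/arm64/kubectl https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl
    curl -Lo mirror/release/v1.35.0/bin/linux/arm64/kubectl.sha256 https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
    ( cd mirror && python3 -m http.server 45761 ) &
    out/minikube-linux-arm64 start --download-only -p demo --binary-mirror http://127.0.0.1:45761 --driver=docker --container-runtime=docker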

TestOffline (54.6s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-611575 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-611575 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (52.266749868s)
helpers_test.go:176: Cleaning up "offline-docker-611575" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-611575
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-611575: (2.328685764s)
--- PASS: TestOffline (54.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-762064
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-762064: exit status 85 (170.855689ms)

-- stdout --
	* Profile "addons-762064" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-762064"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-762064
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-762064: exit status 85 (172.259341ms)

-- stdout --
	* Profile "addons-762064" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-762064"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.17s)

TestAddons/Setup (138.61s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-762064 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-762064 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.606025969s)
--- PASS: TestAddons/Setup (138.61s)
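
Note: once a profile like this is up, the enabled/disabled state of every addon requested above can be inspected per profile:

    out/minikube-linux-arm64 addons list -p addons-762064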

TestAddons/serial/Volcano (41.47s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 57.468595ms
addons_test.go:870: volcano-scheduler stabilized in 57.65104ms
addons_test.go:878: volcano-admission stabilized in 57.840476ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-7764798495-lhvvk" [1eee7ed9-c6ac-424b-a823-190ac1afcc04] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004122597s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-5986c947c8-7rh99" [db343c96-6394-4f0c-a862-d5e6933618b3] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003902305s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-d9cfd74d6-8ffhh" [034aaf87-25e3-449e-99d7-3e5e7e4f8e54] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003729614s
addons_test.go:905: (dbg) Run:  kubectl --context addons-762064 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-762064 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-762064 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [55522a99-13f0-4dd6-bbe8-ff2514c1c711] Pending
helpers_test.go:353: "test-job-nginx-0" [55522a99-13f0-4dd6-bbe8-ff2514c1c711] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [55522a99-13f0-4dd6-bbe8-ff2514c1c711] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003143285s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable volcano --alsologtostderr -v=1: (11.831574643s)
--- PASS: TestAddons/serial/Volcano (41.47s)
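
Note: testdata/vcjob.yaml itself is not shown in this log. A hypothetical minimal equivalent, consistent with the test-job-nginx-0 pod name observed above (all field values are assumptions, not the test's actual fixture):

    kubectl --context addons-762064 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      minAvailable: 1
      schedulerName: volcano
      tasks:
        - replicas: 1
          name: nginx          # task name; pods are named <job>-<task>-<index>
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx
    EOF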

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-762064 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-762064 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/serial/GCPAuth/FakeCredentials (10.95s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-762064 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-762064 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7f3c5cb5-dbce-4fa1-b796-cb183838415b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [7f3c5cb5-dbce-4fa1-b796-cb183838415b] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.009807916s
addons_test.go:696: (dbg) Run:  kubectl --context addons-762064 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-762064 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-762064 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-762064 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.95s)
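
The key assertion here is that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS into the busybox pod's environment. A small sketch of that check, assuming kubectl is on PATH and reusing the pod and context names from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// printenv exits non-zero when the variable is absent, so the error
	// alone distinguishes "injected" from "not injected".
	out, err := exec.Command("kubectl", "--context", "addons-762064",
		"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
	if err != nil {
		panic("GOOGLE_APPLICATION_CREDENTIALS not injected: " + err.Error())
	}
	fmt.Println("credentials path:", strings.TrimSpace(string(out)))
}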

TestAddons/parallel/Registry (15.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.481599ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-wtw87" [531f42e4-7fd5-41e4-89ec-fe824ad10bc1] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004145998s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-cpmbm" [216344d7-4b4f-4d3c-8d51-2e900f8d8b5b] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003997866s
addons_test.go:394: (dbg) Run:  kubectl --context addons-762064 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-762064 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-762064 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.322268826s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 ip
2025/12/29 06:50:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.35s)
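
The DEBUG GET line shows the follow-up probe: after the in-cluster wget --spider succeeds, the test fetches the registry through the node IP on port 5000. A sketch of that outside-the-cluster probe, assuming it runs from the repo root so the out/minikube-linux-arm64 relative path resolves; the 5-second timeout is an assumption:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// `minikube ip` prints the node address that the registry addon exposes on :5000.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-762064", "ip").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	client := &http.Client{Timeout: 5 * time.Second} // assumed timeout
	resp, err := client.Get("http://" + ip + ":5000")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}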

TestAddons/parallel/RegistryCreds (0.7s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.523507ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-762064
addons_test.go:334: (dbg) Run:  kubectl --context addons-762064 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/Ingress (19.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-762064 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-762064 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-762064 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [5c09ec14-c395-49e7-b1a9-f2c79b933e9d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [5c09ec14-c395-49e7-b1a9-f2c79b933e9d] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003221308s
I1229 06:50:54.169212  725078 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-762064 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable ingress-dns --alsologtostderr -v=1: (1.679466529s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable ingress --alsologtostderr -v=1: (8.004463296s)
--- PASS: TestAddons/parallel/Ingress (19.89s)
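
The core ingress assertion is the curl through the node's port 80 with a spoofed Host header, run over minikube ssh. Reproduced as a standalone sketch (repo-root relative binary path assumed, names copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log: hit the ingress controller on the node's
	// port 80 while presenting the Host the Ingress rule routes on.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-762064",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	fmt.Printf("%s", out)
}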

TestAddons/parallel/InspektorGadget (10.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-cf5p2" [77e2c0ea-b2e7-437a-b58d-13fd08318c8b] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004420761s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable inspektor-gadget --alsologtostderr -v=1: (5.919522601s)
--- PASS: TestAddons/parallel/InspektorGadget (10.92s)

TestAddons/parallel/MetricsServer (5.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.298718ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-2lrjm" [0ff13e60-8ac6-47b6-9b21-0e26bf6e8061] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003988544s
addons_test.go:465: (dbg) Run:  kubectl --context addons-762064 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/CSI (50.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1229 06:50:12.774739  725078 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1229 06:50:12.778109  725078 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1229 06:50:12.778137  725078 kapi.go:107] duration metric: took 6.857153ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.8686ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-762064 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-762064 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [582235f3-c1f3-487c-ad81-1fec4e29df93] Pending
helpers_test.go:353: "task-pv-pod" [582235f3-c1f3-487c-ad81-1fec4e29df93] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [582235f3-c1f3-487c-ad81-1fec4e29df93] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004110233s
addons_test.go:574: (dbg) Run:  kubectl --context addons-762064 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-762064 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-762064 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-762064 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-762064 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-762064 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-762064 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [425f6650-175b-42b7-9f1f-f31565fa7425] Pending
helpers_test.go:353: "task-pv-pod-restore" [425f6650-175b-42b7-9f1f-f31565fa7425] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [425f6650-175b-42b7-9f1f-f31565fa7425] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005433502s
addons_test.go:616: (dbg) Run:  kubectl --context addons-762064 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-762064 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-762064 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable volumesnapshots --alsologtostderr -v=1: (1.340292358s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.991861325s)
--- PASS: TestAddons/parallel/CSI (50.93s)
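
The CSI flow above is: create a PVC, consume it from a pod, snapshot the volume, delete the originals, then restore the snapshot into a fresh claim and pod. A compressed sketch of the same kubectl sequence; the readiness waits between steps (visible in the log) are elided here for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// kubectlDo shells out against the test cluster and fails loudly, the way the
// addons_test.go steps chain their create/delete calls.
func kubectlDo(args ...string) {
	full := append([]string{"--context", "addons-762064"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
}

func main() {
	kubectlDo("create", "-f", "testdata/csi-hostpath-driver/pvc.yaml")      // claim storage
	kubectlDo("create", "-f", "testdata/csi-hostpath-driver/pv-pod.yaml")   // consume it
	kubectlDo("create", "-f", "testdata/csi-hostpath-driver/snapshot.yaml") // snapshot it
	kubectlDo("delete", "pod", "task-pv-pod")                               // drop the originals
	kubectlDo("delete", "pvc", "hpvc")
	kubectlDo("create", "-f", "testdata/csi-hostpath-driver/pvc-restore.yaml") // restore from snapshot
	kubectlDo("create", "-f", "testdata/csi-hostpath-driver/pv-pod-restore.yaml")
}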

TestAddons/parallel/Headlamp (18s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-762064 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-762064 --alsologtostderr -v=1: (1.035442074s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-6d8d595f-xp6qb" [162cf3a5-bba3-4f13-8203-f46540a3f335] Pending
helpers_test.go:353: "headlamp-6d8d595f-xp6qb" [162cf3a5-bba3-4f13-8203-f46540a3f335] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-6d8d595f-xp6qb" [162cf3a5-bba3-4f13-8203-f46540a3f335] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.005063945s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable headlamp --alsologtostderr -v=1: (5.959535078s)
--- PASS: TestAddons/parallel/Headlamp (18.00s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-lvrpk" [9f67d9ff-cab0-4b6d-b1a8-f0394a5e63e1] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004426492s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (9.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-762064 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-762064 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [e02cbe78-655d-44d0-8882-fc0b7ae37ffb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [e02cbe78-655d-44d0-8882-fc0b7ae37ffb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [e02cbe78-655d-44d0-8882-fc0b7ae37ffb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.007574677s
addons_test.go:969: (dbg) Run:  kubectl --context addons-762064 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 ssh "cat /opt/local-path-provisioner/pvc-b0d253e1-827b-4f43-9e89-e10413fe8c5f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-762064 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-762064 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.65s)
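
The ssh "cat /opt/local-path-provisioner/..." step works because the local-path provisioner stores each volume under <pv-name>_<namespace>_<claim-name>, and the PV name is recorded in the claim's spec.volumeName (the pvc-..._default_test-pvc path in the log shows exactly this). A sketch that derives the path instead of hard-coding the UID:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-762064",
		"get", "pvc", "test-pvc", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pvc struct {
		Spec struct {
			VolumeName string `json:"volumeName"`
		} `json:"spec"`
	}
	if err := json.Unmarshal(out, &pvc); err != nil {
		panic(err)
	}
	// local-path lays volumes out as <pv-name>_<namespace>_<claim-name>.
	path := fmt.Sprintf("/opt/local-path-provisioner/%s_default_test-pvc/file1", pvc.Spec.VolumeName)
	data, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-762064",
		"ssh", "cat "+path).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("file1 contents:", strings.TrimSpace(string(data)))
}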

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-qxbs4" [05f1f46c-9b87-402b-88b9-c81215db5e8a] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004121538s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (11.73s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-nrvdf" [a06ea43c-3c65-4d41-b3f9-0f35db86e5b0] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003903648s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-762064 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-arm64 -p addons-762064 addons disable yakd --alsologtostderr -v=1: (5.725502019s)
--- PASS: TestAddons/parallel/Yakd (11.73s)

TestAddons/StoppedEnableDisable (11.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-762064
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-762064: (11.11979576s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-762064
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-762064
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-762064
--- PASS: TestAddons/StoppedEnableDisable (11.40s)

TestCertOptions (35s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-173919 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-173919 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.079691836s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-173919 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-173919 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-173919 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-173919" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-173919
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-173919: (2.235238368s)
--- PASS: TestCertOptions (35.00s)
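
The openssl step is the heart of this test: the extra --apiserver-ips/--apiserver-names values must appear as SANs in the generated apiserver certificate. A sketch of that verification, with the expected SANs taken from the start flags above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver certificate from inside the node, then scan the
	// text form for the SANs requested on the start command line.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-options-173919",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		panic(err)
	}
	cert := string(out)
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(cert, want) {
			panic("missing SAN: " + want)
		}
	}
	fmt.Println("all requested SANs present")
}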

TestCertExpiration (245.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-726957 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1229 07:33:05.630306  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-726957 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.268904859s)
E1229 07:33:46.069824  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-726957 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-726957 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (28.419101434s)
helpers_test.go:176: Cleaning up "cert-expiration-726957" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-726957
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-726957: (2.375585286s)
--- PASS: TestCertExpiration (245.06s)

TestDockerFlags (33.51s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-139514 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1229 07:32:50.283558  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-139514 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.413273246s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-139514 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-139514 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-139514" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-139514
E1229 07:33:17.970295  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-139514: (2.346471071s)
--- PASS: TestDockerFlags (33.51s)
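
The two systemctl show calls verify that --docker-env values surface in the docker unit's Environment property and --docker-opt values in its ExecStart line. A sketch of those checks; the exact substrings asserted here ("FOO=BAR", "--debug") are assumptions based on the flags passed above, not copied from the test source:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Each --property dump is one line; substring matching is enough here.
	for prop, want := range map[string]string{
		"Environment": "FOO=BAR",  // from --docker-env=FOO=BAR
		"ExecStart":   "--debug",  // assumed rendering of --docker-opt=debug
	} {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "docker-flags-139514",
			"ssh", "sudo systemctl show docker --property="+prop+" --no-pager").Output()
		if err != nil {
			panic(err)
		}
		if !strings.Contains(string(out), want) {
			panic(fmt.Sprintf("%s missing %q: %s", prop, want, out))
		}
	}
	fmt.Println("docker flags propagated")
}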

TestErrorSpam/setup (27.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-920436 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-920436 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-920436 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-920436 --driver=docker  --container-runtime=docker: (27.324651742s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (27.32s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.5s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 pause
--- PASS: TestErrorSpam/pause (1.50s)

TestErrorSpam/unpause (1.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

TestErrorSpam/stop (11.57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 stop: (11.370068755s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-920436 --log_dir /tmp/nospam-920436 stop
--- PASS: TestErrorSpam/stop (11.57s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22353-723215/.minikube/files/etc/test/nested/copy/725078/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (73.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175099 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-175099 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m13.542183381s)
--- PASS: TestFunctional/serial/StartWithProxy (73.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.85s)

=== RUN   TestFunctional/serial/SoftStart
I1229 06:53:19.030688  725078 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175099 --alsologtostderr -v=8
E1229 06:53:46.071860  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.077939  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.089713  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.109984  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.150381  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.230886  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.391361  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:46.711586  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:47.352646  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:48.633216  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:51.193417  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:53:56.314056  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-175099 --alsologtostderr -v=8: (41.845200891s)
functional_test.go:678: soft start took 41.848440464s for "functional-175099" cluster.
I1229 06:54:00.876254  725078 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (41.85s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-175099 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.91s)

TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-175099 /tmp/TestFunctionalserialCacheCmdcacheadd_local2784116322/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cache add minikube-local-cache-test:functional-175099
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cache delete minikube-local-cache-test:functional-175099
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-175099
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.515226ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cache reload
E1229 06:54:06.554474  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
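
The reload cycle is: remove the cached image inside the node, confirm crictl no longer finds it (the expected non-zero exit captured above), run cache reload, and confirm the image is back. As a standalone sketch, assuming it runs from the repo root so the relative binary path resolves:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary against the functional test profile.
func mk(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-175099"}, args...)...).CombinedOutput()
}

func main() {
	// Delete the image inside the node...
	if out, err := mk("ssh", "sudo docker rmi registry.k8s.io/pause:latest"); err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	// ...prove crictl no longer sees it (a failure here is the expected state)...
	if _, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		panic("image still present after rmi")
	}
	// ...then let `cache reload` restore it from the local cache directory.
	if out, err := mk("cache", "reload"); err != nil {
		panic(fmt.Sprintf("%v\n%s", err, out))
	}
	if out, err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		panic(fmt.Sprintf("image missing after reload: %v\n%s", err, out))
	}
	fmt.Println("cache reload restored pause:latest")
}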

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 kubectl -- --context functional-175099 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-175099 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (45.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175099 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1229 06:54:27.035287  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-175099 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.669037513s)
functional_test.go:776: restart took 45.669159947s for "functional-175099" cluster.
I1229 06:54:53.003728  725078 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (45.67s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-175099 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
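
The health check lists tier=control-plane pods and requires each to be phase Running with condition Ready=True, which is what the phase/status pairs above report. A sketch of that parse using only the standard library:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-175099",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	// encoding/json matches fields case-insensitively, so the lowercase
	// Kubernetes keys bind to these exported names without tags.
	var list struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct {
				Phase      string
				Conditions []struct{ Type, Status string }
			}
		}
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, p := range list.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}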

TestFunctional/serial/LogsCmd (1.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-175099 logs: (1.197881161s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

TestFunctional/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 logs --file /tmp/TestFunctionalserialLogsFileCmd1779843099/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-175099 logs --file /tmp/TestFunctionalserialLogsFileCmd1779843099/001/logs.txt: (1.243341333s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (5.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-175099 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-175099
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-175099: exit status 115 (692.501528ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31364 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-175099 delete -f testdata/invalidsvc.yaml
functional_test.go:2337: (dbg) Done: kubectl --context functional-175099 delete -f testdata/invalidsvc.yaml: (1.081465593s)
--- PASS: TestFunctional/serial/InvalidService (5.05s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 config get cpus: exit status 14 (83.458069ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 config get cpus: exit status 14 (60.35709ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
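
Note the two Non-zero exit lines: `config get` on an unset key exits 14, and the test treats that as the expected outcome rather than a failure. A sketch of asserting on that exit code from Go (profile name copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// With cpus unset, `config get cpus` is expected to fail with status 14.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-175099",
		"config", "get", "cpus")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("cpus is unset, as expected")
		return
	}
	panic(fmt.Sprintf("expected exit status 14, got %v", err))
}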

TestFunctional/parallel/DashboardCmd (15.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-175099 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-175099 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 767646: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.37s)

TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175099 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-175099 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (272.45725ms)

-- stdout --
	* [functional-175099] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1229 06:55:32.918817  767016 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:55:32.919354  767016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:32.919416  767016 out.go:374] Setting ErrFile to fd 2...
	I1229 06:55:32.919454  767016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:32.920728  767016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 06:55:32.921290  767016 out.go:368] Setting JSON to false
	I1229 06:55:32.922432  767016 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13082,"bootTime":1766978251,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 06:55:32.922599  767016 start.go:143] virtualization:  
	I1229 06:55:32.927422  767016 out.go:179] * [functional-175099] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1229 06:55:32.931967  767016 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:55:32.932017  767016 notify.go:221] Checking for updates...
	I1229 06:55:32.935808  767016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:55:32.940014  767016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 06:55:32.943880  767016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 06:55:32.947542  767016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 06:55:32.951332  767016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:55:32.955713  767016 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:55:32.956367  767016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:55:32.994936  767016 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:55:32.995035  767016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:55:33.097908  767016 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 06:55:33.087140291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:55:33.098005  767016 docker.go:319] overlay module found
	I1229 06:55:33.102678  767016 out.go:179] * Using the docker driver based on existing profile
	I1229 06:55:33.105650  767016 start.go:309] selected driver: docker
	I1229 06:55:33.105667  767016 start.go:928] validating driver "docker" against &{Name:functional-175099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-175099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:55:33.105771  767016 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:55:33.109136  767016 out.go:203] 
	W1229 06:55:33.112565  767016 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1229 06:55:33.116074  767016 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175099 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.63s)
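
Note: the dry run exits 23 because the requested 250MB sits below the 1800MB usable minimum minikube reports. A sketch of that kind of pre-flight check; the constant and message are taken from the log above, not from minikube's source:

package main

import "fmt"

// minUsableMemoryMB is the floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY
// message above.
const minUsableMemoryMB = 1800

// validateMemory rejects a request below the usable minimum before any
// real provisioning work starts, which is why the failure is fast.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}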

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-175099 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-175099 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (233.847364ms)

-- stdout --
	* [functional-175099] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1229 06:55:32.673805  766958 out.go:360] Setting OutFile to fd 1 ...
	I1229 06:55:32.673972  766958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:32.673983  766958 out.go:374] Setting ErrFile to fd 2...
	I1229 06:55:32.673989  766958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 06:55:32.675029  766958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 06:55:32.675412  766958 out.go:368] Setting JSON to false
	I1229 06:55:32.676519  766958 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13082,"bootTime":1766978251,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1229 06:55:32.676593  766958 start.go:143] virtualization:  
	I1229 06:55:32.680414  766958 out.go:179] * [functional-175099] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1229 06:55:32.684206  766958 out.go:179]   - MINIKUBE_LOCATION=22353
	I1229 06:55:32.684394  766958 notify.go:221] Checking for updates...
	I1229 06:55:32.690135  766958 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1229 06:55:32.692964  766958 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	I1229 06:55:32.698045  766958 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	I1229 06:55:32.700969  766958 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1229 06:55:32.703867  766958 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1229 06:55:32.707214  766958 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 06:55:32.707796  766958 driver.go:422] Setting default libvirt URI to qemu:///system
	I1229 06:55:32.745807  766958 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1229 06:55:32.745958  766958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 06:55:32.826070  766958 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-29 06:55:32.813799273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 06:55:32.826204  766958 docker.go:319] overlay module found
	I1229 06:55:32.829596  766958 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1229 06:55:32.833424  766958 start.go:309] selected driver: docker
	I1229 06:55:32.833444  766958 start.go:928] validating driver "docker" against &{Name:functional-175099 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1766979815-22353@sha256:20dad5895b49b986a1253c0faab60865204843ac97fd3a6e6210da5896244409 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-175099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I1229 06:55:32.833546  766958 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1229 06:55:32.838052  766958 out.go:203] 
	W1229 06:55:32.841077  766958 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1229 06:55:32.844743  766958 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
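
Note: this is the same failing dry run as TestFunctional/parallel/DryRun, re-run with a French locale so the RSRC_INSUFFICIENT_REQ_MEMORY message comes out translated. A sketch of forcing the locale for one invocation, assuming minikube picks its translation from LC_ALL/LANG (the exact variables the test sets are not visible in this log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-p", "functional-175099", "--dry-run", "--memory", "250MB", "--driver=docker")
	// Override the locale for this child process only; everything else
	// is inherited from the parent environment.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	_ = cmd.Run() // exit status 23 expected, as in the English run
}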

TestFunctional/parallel/StatusCmd (1.1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
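
Note: the second invocation renders the status through a Go template ("kublet" is simply the label the test's format string uses). A standalone sketch of the same rendering with text/template; the Status struct here is a stand-in, not minikube's real type:

package main

import (
	"os"
	"text/template"
)

// Status carries just the four fields the template references.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}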

TestFunctional/parallel/ServiceCmdConnect (8.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-175099 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-175099 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-tjns8" [f46a2c00-525c-4abb-994e-17c137b81378] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-tjns8" [f46a2c00-525c-4abb-994e-17c137b81378] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004020696s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:31756
functional_test.go:1685: http://192.168.49.2:31756: success! body:
Request served by hello-node-connect-5d95464fd4-tjns8

HTTP/1.1 GET /

Host: 192.168.49.2:31756
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.60s)
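
Note: the test deploys the echo-server image, exposes it as a NodePort service, asks minikube for the URL, and then checks that a GET returns a body naming the serving pod. A sketch of that final probe; the URL is the one from this run and will differ per cluster:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:31756")
	if err != nil {
		fmt.Println("endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echo-server reflects the request, so the body names the pod that
	// served it, as in the log above.
	fmt.Printf("success! body:\n%s", body)
}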

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (20.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b9b7e257-e0fa-4617-9354-8a694cbc8484] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004823756s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-175099 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-175099 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-175099 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-175099 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e059f3ae-319d-4dd4-973c-b77ce3e71429] Pending
helpers_test.go:353: "sp-pod" [e059f3ae-319d-4dd4-973c-b77ce3e71429] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004023278s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-175099 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-175099 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-175099 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [98b606e4-1e8a-4d13-97ad-888f3625788a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [98b606e4-1e8a-4d13-97ad-888f3625788a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003678972s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-175099 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.97s)
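
Note: the deciding step is the last one: a second pod bound to the same claim still sees /tmp/mount/foo after the first pod was deleted, which is what proves the volume outlived the pod. A compressed sketch of that persistence check, shelling out to kubectl with the same context and manifests as the log:

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the functional-175099 context and returns
// combined output so every step is visible.
func kc(args ...string) string {
	full := append([]string{"--context", "functional-175099"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v failed: %v\n", args, err)
	}
	return string(out)
}

func main() {
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new pod to be Running before this exec.
	fmt.Print(kc("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}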

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (2.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh -n functional-175099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cp functional-175099:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4133414093/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh -n functional-175099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh -n functional-175099 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.66s)

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/725078/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /etc/test/nested/copy/725078/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.84s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/725078.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /etc/ssl/certs/725078.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/725078.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /usr/share/ca-certificates/725078.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /etc/ssl/certs/51391683.0"
2025/12/29 06:55:48 [DEBUG] GET http://127.0.0.1:40937/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2009: Checking for existence of /etc/ssl/certs/7250782.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /etc/ssl/certs/7250782.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/7250782.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /usr/share/ca-certificates/7250782.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-175099 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
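
Note: the --template flag above ranges over the first node's label map and prints just the keys. The same template works on any map; a standalone sketch with illustrative labels:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Illustrative labels only; the real test reads them from the node.
	labels := map[string]string{
		"kubernetes.io/arch":     "arm64",
		"kubernetes.io/hostname": "functional-175099",
	}
	tmpl := template.Must(template.New("labels").Parse(
		"'{{range $k, $v := .}}{{$k}} {{end}}'\n"))
	_ = tmpl.Execute(os.Stdout, labels)
}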

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo systemctl is-active crio"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh "sudo systemctl is-active crio": exit status 1 (362.753026ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
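
Note: "systemctl is-active" exits non-zero for an inactive unit (status 3 here, relayed by ssh) while still printing the state, so a non-zero exit plus "inactive" on stdout is exactly what the test wants for a runtime that should be disabled. A sketch of reading a unit's state that way, assuming a systemd host:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// unitState returns systemd's one-word state for a unit; is-active
// exits 0 only when the unit is active, so err doubles as the flag.
func unitState(unit string) (string, bool) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	return strings.TrimSpace(string(out)), err == nil
}

func main() {
	state, active := unitState("crio")
	fmt.Printf("crio: %s (active=%v)\n", state, active)
}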

TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-175099 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-175099 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-175099 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-175099 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 763847: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-175099 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-175099 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [95683b86-c27e-4d10-bf9c-5d0cda9f92e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [95683b86-c27e-4d10-bf9c-5d0cda9f92e1] Running
E1229 06:55:07.996354  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003663799s
I1229 06:55:12.078034  725078 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-175099 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.9.239 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-175099 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-175099 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-175099 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-np2tg" [a3094c32-0510-48ea-8c6f-adce9b87e009] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-np2tg" [a3094c32-0510-48ea-8c6f-adce9b87e009] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003192729s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "363.358033ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "65.204403ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "364.120803ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "51.517231ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (9.15s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdany-port2167874975/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766991325498111235" to /tmp/TestFunctionalparallelMountCmdany-port2167874975/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766991325498111235" to /tmp/TestFunctionalparallelMountCmdany-port2167874975/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766991325498111235" to /tmp/TestFunctionalparallelMountCmdany-port2167874975/001/test-1766991325498111235
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (397.559967ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1229 06:55:25.897596  725078 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 29 06:55 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 29 06:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 29 06:55 test-1766991325498111235
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh cat /mount-9p/test-1766991325498111235
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-175099 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [2f414b1a-ce69-4553-83cd-7c84e4500847] Pending
helpers_test.go:353: "busybox-mount" [2f414b1a-ce69-4553-83cd-7c84e4500847] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [2f414b1a-ce69-4553-83cd-7c84e4500847] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [2f414b1a-ce69-4553-83cd-7c84e4500847] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003711038s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-175099 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdany-port2167874975/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.15s)
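
Note: the first findmnt probe fails because the 9p mount is still being established; retry.go backs off 400ms and the second probe succeeds. A sketch of the same poll-until-mounted loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `findmnt -T dir` until the mountpoint is visible,
// mirroring the retry shown in the log above.
func waitForMount(dir string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("findmnt", "-T", dir).Run(); err == nil {
			return nil // mount is up
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("%s never appeared as a mountpoint: %w", dir, err)
}

func main() {
	fmt.Println(waitForMount("/mount-9p", 5, 400*time.Millisecond))
}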

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 service list -o json
functional_test.go:1509: Took "605.089104ms" to run "out/minikube-linux-arm64 -p functional-175099 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31194
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31194
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/MountCmd/specific-port (2.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdspecific-port2759960923/001:/mount-9p --alsologtostderr -v=1 --port 33945]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (625.106384ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1229 06:55:35.274101  725078 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdspecific-port2759960923/001:/mount-9p --alsologtostderr -v=1 --port 33945] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh "sudo umount -f /mount-9p": exit status 1 (362.583566ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-175099 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdspecific-port2759960923/001:/mount-9p --alsologtostderr -v=1 --port 33945] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3881876245/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3881876245/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3881876245/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T" /mount1: exit status 1 (1.109127908s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1229 06:55:38.383164  725078 retry.go:84] will retry after 400ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-175099 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3881876245/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3881876245/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-175099 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3881876245/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.67s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 version -o=json --components
functional_test.go:2280: (dbg) Done: out/minikube-linux-arm64 -p functional-175099 version -o=json --components: (1.183331912s)
--- PASS: TestFunctional/parallel/Version/components (1.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175099 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-175099
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175099 image ls --format short --alsologtostderr:
I1229 06:55:49.726045  770135 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:49.726156  770135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:49.726165  770135 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:49.726172  770135 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:49.726412  770135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 06:55:49.726983  770135 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:49.727100  770135 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:49.727593  770135 cli_runner.go:164] Run: docker container inspect functional-175099 --format={{.State.Status}}
I1229 06:55:49.748692  770135 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:49.748754  770135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175099
I1229 06:55:49.766822  770135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/functional-175099/id_rsa Username:docker}
I1229 06:55:49.874612  770135 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175099 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                       IMAGE                       │        TAG        │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                             │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-175099 │ ce2d2cda2d858 │ 4.78MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                             │ 3.3               │ 3d18732f8686c │ 484kB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ docker.io/library/minikube-local-cache-test       │ functional-175099 │ 8782230541381 │ 30B    │
│ public.ecr.aws/nginx/nginx                        │ alpine            │ 962dbbc0e55ec │ 53.7MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0           │ ddc8422d4d35a │ 48.7MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0           │ 271e49a0ebc56 │ 59.8MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                             │ 3.1               │ 8057e0500773a │ 525kB  │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0           │ c3fcf259c473a │ 83.9MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0           │ 88898f1d1a62a │ 71.1MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0           │ de369f46c2ff5 │ 72.8MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1           │ e08f4d9d2e6ed │ 73.4MB │
│ registry.k8s.io/pause                             │ latest            │ 8cb2091f603e7 │ 240kB  │
└───────────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175099 image ls --format table --alsologtostderr:
I1229 06:55:50.541065  770321 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:50.541242  770321 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:50.541255  770321 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:50.541262  770321 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:50.541573  770321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 06:55:50.542269  770321 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:50.542433  770321 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:50.543002  770321 cli_runner.go:164] Run: docker container inspect functional-175099 --format={{.State.Status}}
I1229 06:55:50.572993  770321 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:50.573053  770321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175099
I1229 06:55:50.599592  770321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/functional-175099/id_rsa Username:docker}
I1229 06:55:50.708892  770321 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175099 image ls --format json --alsologtostderr:
[{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"59800000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8782230541381cf68975cc684c7378bfe43b49b117b857a338780a801fe7b8fa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-175099"],"size":"30"},{"id":"962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"72800000"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"73400000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6
294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"71100000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"c3fcf259c473a57a5d7da116e291619044
91091743512d27467c907c5516f856","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"83900000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"48700000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175099 image ls --format json --alsologtostderr:
I1229 06:55:50.256820  770232 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:50.259110  770232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:50.259161  770232 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:50.259181  770232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:50.260489  770232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 06:55:50.261335  770232 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:50.262860  770232 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:50.263462  770232 cli_runner.go:164] Run: docker container inspect functional-175099 --format={{.State.Status}}
I1229 06:55:50.312031  770232 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:50.312090  770232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175099
I1229 06:55:50.331338  770232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/functional-175099/id_rsa Username:docker}
I1229 06:55:50.438643  770232 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-175099 image ls --format yaml --alsologtostderr:
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "83900000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "71100000"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "72800000"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "59800000"
- id: 8782230541381cf68975cc684c7378bfe43b49b117b857a338780a801fe7b8fa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-175099
size: "30"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "73400000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 962dbbc0e55ec93371166cf3e1f723875ce281259bb90b8092248398555aff67
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "48700000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175099 image ls --format yaml --alsologtostderr:
I1229 06:55:49.961517  770176 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:49.961697  770176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:49.961723  770176 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:49.961743  770176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:49.962098  770176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 06:55:49.962783  770176 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:49.962958  770176 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:49.963519  770176 cli_runner.go:164] Run: docker container inspect functional-175099 --format={{.State.Status}}
I1229 06:55:49.982621  770176 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:49.982677  770176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175099
I1229 06:55:50.008927  770176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/functional-175099/id_rsa Username:docker}
I1229 06:55:50.131361  770176 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-175099 ssh pgrep buildkitd: exit status 1 (369.202629ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image build -t localhost/my-image:functional-175099 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-175099 image build -t localhost/my-image:functional-175099 testdata/build --alsologtostderr: (3.142360129s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-175099 image build -t localhost/my-image:functional-175099 testdata/build --alsologtostderr:
I1229 06:55:50.547169  770320 out.go:360] Setting OutFile to fd 1 ...
I1229 06:55:50.547941  770320 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:50.547953  770320 out.go:374] Setting ErrFile to fd 2...
I1229 06:55:50.547959  770320 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1229 06:55:50.548278  770320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
I1229 06:55:50.548880  770320 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:50.551162  770320 config.go:182] Loaded profile config "functional-175099": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
I1229 06:55:50.551823  770320 cli_runner.go:164] Run: docker container inspect functional-175099 --format={{.State.Status}}
I1229 06:55:50.578705  770320 ssh_runner.go:195] Run: systemctl --version
I1229 06:55:50.578767  770320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-175099
I1229 06:55:50.610130  770320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/functional-175099/id_rsa Username:docker}
I1229 06:55:50.719779  770320 build_images.go:162] Building image from path: /tmp/build.2527533527.tar
I1229 06:55:50.719865  770320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1229 06:55:50.731460  770320 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2527533527.tar
I1229 06:55:50.736388  770320 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2527533527.tar: stat -c "%s %y" /var/lib/minikube/build/build.2527533527.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2527533527.tar': No such file or directory
I1229 06:55:50.736413  770320 ssh_runner.go:362] scp /tmp/build.2527533527.tar --> /var/lib/minikube/build/build.2527533527.tar (3072 bytes)
I1229 06:55:50.759073  770320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2527533527
I1229 06:55:50.767460  770320 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2527533527 -xf /var/lib/minikube/build/build.2527533527.tar
I1229 06:55:50.775498  770320 docker.go:364] Building image: /var/lib/minikube/build/build.2527533527
I1229 06:55:50.775568  770320 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-175099 /var/lib/minikube/build/build.2527533527
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c90abb1def88616dbb45fdbe00baf003a32b9bd39decc2bd0ad498219fa3f1d4 done
#8 naming to localhost/my-image:functional-175099 done
#8 DONE 0.1s
I1229 06:55:53.590521  770320 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-175099 /var/lib/minikube/build/build.2527533527: (2.814930626s)
I1229 06:55:53.590590  770320 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2527533527
I1229 06:55:53.599176  770320 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2527533527.tar
I1229 06:55:53.606593  770320 build_images.go:218] Built localhost/my-image:functional-175099 from /tmp/build.2527533527.tar
I1229 06:55:53.606622  770320 build_images.go:134] succeeded building to: functional-175099
I1229 06:55:53.606628  770320 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)
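The same build path can be driven directly from the minikube CLI. A minimal sketch, assuming a directory containing a Dockerfile (this run uses testdata/build):

	out/minikube-linux-arm64 -p functional-175099 image build -t localhost/my-image:functional-175099 testdata/build --alsologtostderr
	out/minikube-linux-arm64 -p functional-175099 image ls   # the freshly built tag should appear in the listing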

TestFunctional/parallel/ImageCommands/Setup (0.59s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-175099 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/DockerEnv/bash (1.04s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-175099 docker-env) && out/minikube-linux-arm64 status -p functional-175099"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-175099 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)
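As exercised above, docker-env points a host docker CLI at the Docker daemon inside the minikube node. A minimal sketch with this run's profile:

	eval $(out/minikube-linux-arm64 -p functional-175099 docker-env)
	docker images   # now lists images from the cluster's daemon rather than the host's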

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-175099
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-175099
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-175099
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (153.09s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1229 06:56:29.917143  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m32.208181835s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (153.09s)

TestMultiControlPlane/serial/DeployApp (8.24s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 kubectl -- rollout status deployment/busybox: (5.272635059s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-29h7r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-9l7t9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-vv5wj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-29h7r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-9l7t9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-vv5wj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-29h7r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-9l7t9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-vv5wj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.24s)

TestMultiControlPlane/serial/PingHostFromPods (1.75s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-29h7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-29h7r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-9l7t9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-9l7t9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-vv5wj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 kubectl -- exec busybox-769dd8b7dd-vv5wj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)

TestMultiControlPlane/serial/AddWorkerNode (65.16s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node add --alsologtostderr -v 5
E1229 06:58:46.070033  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 06:59:13.758261  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 node add --alsologtostderr -v 5: (1m4.056628707s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5: (1.107686313s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (65.16s)

TestMultiControlPlane/serial/NodeLabels (0.14s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-007683 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.242509243s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.24s)

TestMultiControlPlane/serial/CopyFile (21.74s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 status --output json --alsologtostderr -v 5: (1.06247232s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp testdata/cp-test.txt ha-007683:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1392954170/001/cp-test_ha-007683.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683:/home/docker/cp-test.txt ha-007683-m02:/home/docker/cp-test_ha-007683_ha-007683-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test_ha-007683_ha-007683-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683:/home/docker/cp-test.txt ha-007683-m03:/home/docker/cp-test_ha-007683_ha-007683-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test_ha-007683_ha-007683-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683:/home/docker/cp-test.txt ha-007683-m04:/home/docker/cp-test_ha-007683_ha-007683-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test_ha-007683_ha-007683-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp testdata/cp-test.txt ha-007683-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1392954170/001/cp-test_ha-007683-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m02:/home/docker/cp-test.txt ha-007683:/home/docker/cp-test_ha-007683-m02_ha-007683.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test_ha-007683-m02_ha-007683.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m02:/home/docker/cp-test.txt ha-007683-m03:/home/docker/cp-test_ha-007683-m02_ha-007683-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test_ha-007683-m02_ha-007683-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m02:/home/docker/cp-test.txt ha-007683-m04:/home/docker/cp-test_ha-007683-m02_ha-007683-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test_ha-007683-m02_ha-007683-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp testdata/cp-test.txt ha-007683-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1392954170/001/cp-test_ha-007683-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m03:/home/docker/cp-test.txt ha-007683:/home/docker/cp-test_ha-007683-m03_ha-007683.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test_ha-007683-m03_ha-007683.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m03:/home/docker/cp-test.txt ha-007683-m02:/home/docker/cp-test_ha-007683-m03_ha-007683-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test_ha-007683-m03_ha-007683-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m03:/home/docker/cp-test.txt ha-007683-m04:/home/docker/cp-test_ha-007683-m03_ha-007683-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test_ha-007683-m03_ha-007683-m04.txt"
E1229 07:00:02.580619  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:02.585892  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:02.596292  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:02.616542  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:02.657743  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:02.738206  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp testdata/cp-test.txt ha-007683-m04:/home/docker/cp-test.txt
E1229 07:00:02.899560  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:03.220055  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1392954170/001/cp-test_ha-007683-m04.txt
E1229 07:00:03.861099  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m04:/home/docker/cp-test.txt ha-007683:/home/docker/cp-test_ha-007683-m04_ha-007683.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test.txt"
E1229 07:00:05.141902  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683 "sudo cat /home/docker/cp-test_ha-007683-m04_ha-007683.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m04:/home/docker/cp-test.txt ha-007683-m02:/home/docker/cp-test_ha-007683-m04_ha-007683-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m02 "sudo cat /home/docker/cp-test_ha-007683-m04_ha-007683-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 cp ha-007683-m04:/home/docker/cp-test.txt ha-007683-m03:/home/docker/cp-test_ha-007683-m04_ha-007683-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 ssh -n ha-007683-m03 "sudo cat /home/docker/cp-test_ha-007683-m04_ha-007683-m03.txt"
E1229 07:00:07.702199  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.74s)

TestMultiControlPlane/serial/StopSecondaryNode (12.03s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node stop m02 --alsologtostderr -v 5
E1229 07:00:12.822979  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 node stop m02 --alsologtostderr -v 5: (11.246845099s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5: exit status 7 (786.309711ms)

-- stdout --
	ha-007683
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-007683-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-007683-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-007683-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1229 07:00:19.077209  792684 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:00:19.077421  792684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:00:19.077448  792684 out.go:374] Setting ErrFile to fd 2...
	I1229 07:00:19.077467  792684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:00:19.077758  792684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:00:19.078009  792684 out.go:368] Setting JSON to false
	I1229 07:00:19.078069  792684 mustload.go:66] Loading cluster: ha-007683
	I1229 07:00:19.078170  792684 notify.go:221] Checking for updates...
	I1229 07:00:19.078584  792684 config.go:182] Loaded profile config "ha-007683": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:00:19.078619  792684 status.go:174] checking status of ha-007683 ...
	I1229 07:00:19.079480  792684 cli_runner.go:164] Run: docker container inspect ha-007683 --format={{.State.Status}}
	I1229 07:00:19.103084  792684 status.go:371] ha-007683 host status = "Running" (err=<nil>)
	I1229 07:00:19.103106  792684 host.go:66] Checking if "ha-007683" exists ...
	I1229 07:00:19.103421  792684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-007683
	I1229 07:00:19.129013  792684 host.go:66] Checking if "ha-007683" exists ...
	I1229 07:00:19.129321  792684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:00:19.129374  792684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-007683
	I1229 07:00:19.147943  792684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33547 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/ha-007683/id_rsa Username:docker}
	I1229 07:00:19.253684  792684 ssh_runner.go:195] Run: systemctl --version
	I1229 07:00:19.260453  792684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:00:19.273965  792684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:00:19.350287  792684 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-29 07:00:19.340601369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:00:19.350832  792684 kubeconfig.go:125] found "ha-007683" server: "https://192.168.49.254:8443"
	I1229 07:00:19.350870  792684 api_server.go:166] Checking apiserver status ...
	I1229 07:00:19.350913  792684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:00:19.364814  792684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2221/cgroup
	I1229 07:00:19.373235  792684 api_server.go:192] apiserver freezer: "11:freezer:/docker/cac7ce01a012264e77c618ed12a09bf1d8d0bd4dbf88e589e1c05875634468b9/kubepods/burstable/poda271cf140377943dd1b948ada7b75ca6/61a2dddee6defefa1fbd8bcf4b7a3187219918405f0d7d164fe1ab5e9a771769"
	I1229 07:00:19.373310  792684 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cac7ce01a012264e77c618ed12a09bf1d8d0bd4dbf88e589e1c05875634468b9/kubepods/burstable/poda271cf140377943dd1b948ada7b75ca6/61a2dddee6defefa1fbd8bcf4b7a3187219918405f0d7d164fe1ab5e9a771769/freezer.state
	I1229 07:00:19.380877  792684 api_server.go:214] freezer state: "THAWED"
	I1229 07:00:19.380906  792684 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 07:00:19.389380  792684 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 07:00:19.389410  792684 status.go:463] ha-007683 apiserver status = Running (err=<nil>)
	I1229 07:00:19.389421  792684 status.go:176] ha-007683 status: &{Name:ha-007683 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:00:19.389437  792684 status.go:174] checking status of ha-007683-m02 ...
	I1229 07:00:19.389743  792684 cli_runner.go:164] Run: docker container inspect ha-007683-m02 --format={{.State.Status}}
	I1229 07:00:19.405884  792684 status.go:371] ha-007683-m02 host status = "Stopped" (err=<nil>)
	I1229 07:00:19.405909  792684 status.go:384] host is not running, skipping remaining checks
	I1229 07:00:19.405916  792684 status.go:176] ha-007683-m02 status: &{Name:ha-007683-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:00:19.405936  792684 status.go:174] checking status of ha-007683-m03 ...
	I1229 07:00:19.406262  792684 cli_runner.go:164] Run: docker container inspect ha-007683-m03 --format={{.State.Status}}
	I1229 07:00:19.422852  792684 status.go:371] ha-007683-m03 host status = "Running" (err=<nil>)
	I1229 07:00:19.422911  792684 host.go:66] Checking if "ha-007683-m03" exists ...
	I1229 07:00:19.423207  792684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-007683-m03
	I1229 07:00:19.440695  792684 host.go:66] Checking if "ha-007683-m03" exists ...
	I1229 07:00:19.441023  792684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:00:19.441071  792684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-007683-m03
	I1229 07:00:19.458943  792684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/ha-007683-m03/id_rsa Username:docker}
	I1229 07:00:19.565903  792684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:00:19.582273  792684 kubeconfig.go:125] found "ha-007683" server: "https://192.168.49.254:8443"
	I1229 07:00:19.582302  792684 api_server.go:166] Checking apiserver status ...
	I1229 07:00:19.582343  792684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:00:19.596567  792684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2162/cgroup
	I1229 07:00:19.604954  792684 api_server.go:192] apiserver freezer: "11:freezer:/docker/00f695a9c2a2dd22512a40cfc7956961fc232def6c2b2f6d77634d19d260d908/kubepods/burstable/podb78cfe9c3463e458de4af39933f2e5ef/7d9633261b50f08bc41573667418ccefcf7924c56ac34af11cf2e6e9056aff1e"
	I1229 07:00:19.605024  792684 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/00f695a9c2a2dd22512a40cfc7956961fc232def6c2b2f6d77634d19d260d908/kubepods/burstable/podb78cfe9c3463e458de4af39933f2e5ef/7d9633261b50f08bc41573667418ccefcf7924c56ac34af11cf2e6e9056aff1e/freezer.state
	I1229 07:00:19.613446  792684 api_server.go:214] freezer state: "THAWED"
	I1229 07:00:19.613475  792684 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1229 07:00:19.625419  792684 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1229 07:00:19.625451  792684 status.go:463] ha-007683-m03 apiserver status = Running (err=<nil>)
	I1229 07:00:19.625471  792684 status.go:176] ha-007683-m03 status: &{Name:ha-007683-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:00:19.625489  792684 status.go:174] checking status of ha-007683-m04 ...
	I1229 07:00:19.625815  792684 cli_runner.go:164] Run: docker container inspect ha-007683-m04 --format={{.State.Status}}
	I1229 07:00:19.644876  792684 status.go:371] ha-007683-m04 host status = "Running" (err=<nil>)
	I1229 07:00:19.644897  792684 host.go:66] Checking if "ha-007683-m04" exists ...
	I1229 07:00:19.646185  792684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-007683-m04
	I1229 07:00:19.665623  792684 host.go:66] Checking if "ha-007683-m04" exists ...
	I1229 07:00:19.665955  792684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:00:19.666002  792684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-007683-m04
	I1229 07:00:19.684658  792684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/ha-007683-m04/id_rsa Username:docker}
	I1229 07:00:19.793110  792684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:00:19.807966  792684 status.go:176] ha-007683-m04 status: &{Name:ha-007683-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.03s)
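Note: the stderr above spells out how the status probe decides an apiserver is healthy on the Docker driver: pgrep the kube-apiserver process, map its PID to the v1 freezer cgroup, confirm the cgroup is THAWED, then hit /healthz on the shared endpoint. A minimal sketch of the same probe, run inside a node (assumes cgroup v1 with the cgroupfs driver, as the docker info dump above shows):

    # Locate the newest kube-apiserver process, as the status check does.
    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # Map the PID to its freezer cgroup path ("11:freezer:/docker/..." -> field 3).
    path=$(sudo grep -E '^[0-9]+:freezer:' "/proc/$pid/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${path}/freezer.state"   # expect: THAWED
    # Probe the load-balanced apiserver endpoint from the kubeconfig above.
    curl -k https://192.168.49.254:8443/healthz              # expect: ok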

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.4s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node start m02 --alsologtostderr -v 5
E1229 07:00:23.063450  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:00:43.544265  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 node start m02 --alsologtostderr -v 5: (43.029933061s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5: (1.272645203s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.40s)
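Note: outside the harness, the stop/restart cycle above is two CLI calls; a sketch using this run's profile name:

    # Stop only the m02 control plane, bring it back, and re-check status.
    minikube -p ha-007683 node stop m02
    minikube -p ha-007683 node start m02
    minikube -p ha-007683 status   # all nodes should report Running again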

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.120240277s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.56s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 stop --alsologtostderr -v 5
E1229 07:01:24.505574  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 stop --alsologtostderr -v 5: (35.609483224s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 start --wait true --alsologtostderr -v 5
E1229 07:02:46.425758  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:03:46.070147  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 start --wait true --alsologtostderr -v 5: (2m35.793665135s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (191.56s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 node delete m03 --alsologtostderr -v 5: (10.52479693s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.58s)
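Note: the go-template in the final assertion prints one Ready-condition status per node; stripped of the harness quoting it is simply:

    # One line per node; expect "True" for every remaining node after the delete.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'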

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

TestMultiControlPlane/serial/StopCluster (33.44s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 stop --alsologtostderr -v 5
E1229 07:05:02.581932  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 stop --alsologtostderr -v 5: (33.324870709s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5: exit status 7 (116.422817ms)
-- stdout --
	ha-007683
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-007683-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-007683-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1229 07:05:03.541613  820836 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:05:03.541824  820836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:05:03.541847  820836 out.go:374] Setting ErrFile to fd 2...
	I1229 07:05:03.541867  820836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:05:03.542859  820836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:05:03.543192  820836 out.go:368] Setting JSON to false
	I1229 07:05:03.543235  820836 mustload.go:66] Loading cluster: ha-007683
	I1229 07:05:03.543973  820836 config.go:182] Loaded profile config "ha-007683": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:05:03.543994  820836 status.go:174] checking status of ha-007683 ...
	I1229 07:05:03.544794  820836 cli_runner.go:164] Run: docker container inspect ha-007683 --format={{.State.Status}}
	I1229 07:05:03.545466  820836 notify.go:221] Checking for updates...
	I1229 07:05:03.563997  820836 status.go:371] ha-007683 host status = "Stopped" (err=<nil>)
	I1229 07:05:03.564021  820836 status.go:384] host is not running, skipping remaining checks
	I1229 07:05:03.564028  820836 status.go:176] ha-007683 status: &{Name:ha-007683 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:05:03.564072  820836 status.go:174] checking status of ha-007683-m02 ...
	I1229 07:05:03.564506  820836 cli_runner.go:164] Run: docker container inspect ha-007683-m02 --format={{.State.Status}}
	I1229 07:05:03.594999  820836 status.go:371] ha-007683-m02 host status = "Stopped" (err=<nil>)
	I1229 07:05:03.595017  820836 status.go:384] host is not running, skipping remaining checks
	I1229 07:05:03.595023  820836 status.go:176] ha-007683-m02 status: &{Name:ha-007683-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:05:03.595041  820836 status.go:174] checking status of ha-007683-m04 ...
	I1229 07:05:03.595329  820836 cli_runner.go:164] Run: docker container inspect ha-007683-m04 --format={{.State.Status}}
	I1229 07:05:03.612342  820836 status.go:371] ha-007683-m04 host status = "Stopped" (err=<nil>)
	I1229 07:05:03.612362  820836 status.go:384] host is not running, skipping remaining checks
	I1229 07:05:03.612368  820836 status.go:176] ha-007683-m04 status: &{Name:ha-007683-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.44s)
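Note: as with StopSecondaryNode above, the status command exits non-zero (7 in these runs) when hosts are stopped, so callers have to treat that code as a state rather than a failure; a hedged sketch:

    # Exit code 7 here means "hosts stopped", not "the status command broke".
    if minikube -p ha-007683 status; then
        echo "all hosts running"
    elif [ $? -eq 7 ]; then
        echo "one or more hosts stopped"
    else
        echo "status probe itself failed"
    fi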

TestMultiControlPlane/serial/RestartCluster (68.69s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1229 07:05:30.269013  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m7.652880756s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.69s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.90s)

TestMultiControlPlane/serial/AddSecondaryNode (55.09s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 node add --control-plane --alsologtostderr -v 5: (53.941788745s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-007683 status --alsologtostderr -v 5: (1.144193401s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (55.09s)
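Note: replacing the control plane deleted earlier is a single command; a sketch with this run's profile:

    # Add a fresh control-plane node to the existing HA cluster, then verify.
    minikube -p ha-007683 node add --control-plane
    minikube -p ha-007683 status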

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.091636356s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestImageBuild/serial/Setup (28.45s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-305054 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-305054 --driver=docker  --container-runtime=docker: (28.448776871s)
--- PASS: TestImageBuild/serial/Setup (28.45s)

TestImageBuild/serial/NormalBuild (1.56s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-305054
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-305054: (1.554919338s)
--- PASS: TestImageBuild/serial/NormalBuild (1.56s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-305054
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-305054
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.79s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.94s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-305054
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.94s)

TestJSONOutput/start/Command (67.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-511547 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1229 07:08:46.070461  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-511547 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m7.210875884s)
--- PASS: TestJSONOutput/start/Command (67.22s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-511547 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-511547 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (11.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-511547 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-511547 --output=json --user=testUser: (11.152411267s)
--- PASS: TestJSONOutput/stop/Command (11.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-200827 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-200827 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.93163ms)
-- stdout --
	{"specversion":"1.0","id":"d8c8cd9e-f56b-4a97-8671-c8ce5824ca5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-200827] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52efb91f-a0a3-4d18-a3bc-7b1dee61b4f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"48fec85e-1d51-4713-9fbe-adca6e45e828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7eeb1528-1e40-45ee-b06e-6be4304af658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig"}}
	{"specversion":"1.0","id":"0d1edd72-ad7c-4cc2-91ef-91f16f7b3830","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube"}}
	{"specversion":"1.0","id":"894d09cd-f554-4205-9940-27ef73751ae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"09816eaf-fd51-418d-b5f2-68ba2a9f2180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"39613f93-09e1-4388-9d1e-f1ce867f63cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-200827" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-200827
--- PASS: TestErrorJSONOutput (0.24s)
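Note: the stdout above shows the CloudEvents-style stream that --output=json emits: one JSON object per line, a type field (step, info, error), and a data payload. A sketch for consuming it, assuming jq is installed and using an illustrative profile name:

    # Print step progress and surface error events with their exit code.
    minikube start -p demo --output=json |
    jq -r 'if .type == "io.k8s.sigs.minikube.step"
           then "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"
           elif .type == "io.k8s.sigs.minikube.error"
           then "ERROR \(.data.exitcode): \(.data.message)"
           else empty end'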

TestKicCustomNetwork/create_custom_network (27.72s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-247036 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-247036 --network=: (25.409281252s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-247036" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-247036
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-247036: (2.282251163s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.72s)

TestKicCustomNetwork/use_default_bridge_network (30.95s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-522053 --network=bridge
E1229 07:10:02.585573  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:10:09.120267  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-522053 --network=bridge: (28.816507815s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-522053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-522053
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-522053: (2.106191265s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.95s)

TestKicExistingNetwork (30.62s)

=== RUN   TestKicExistingNetwork
I1229 07:10:13.400300  725078 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1229 07:10:13.416096  725078 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1229 07:10:13.416224  725078 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1229 07:10:13.416242  725078 cli_runner.go:164] Run: docker network inspect existing-network
W1229 07:10:13.432163  725078 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1229 07:10:13.432197  725078 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1229 07:10:13.432219  725078 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1229 07:10:13.432322  725078 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1229 07:10:13.448922  725078 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e99902584b0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b2:8c:10:44:52} reservation:<nil>}
I1229 07:10:13.449191  725078 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001f61140}
I1229 07:10:13.449930  725078 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1229 07:10:13.450002  725078 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1229 07:10:13.510674  725078 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-696468 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-696468 --network=existing-network: (28.365302974s)
helpers_test.go:176: Cleaning up "existing-network-696468" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-696468
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-696468: (2.109208462s)
I1229 07:10:44.001907  725078 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (30.62s)
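Note: the log above shows the test pre-creating the named network (subnet 192.168.58.0/24) before starting a profile on it. Reproduced by hand, with an illustrative profile name:

    # Pre-create a bridge network, then attach a minikube profile to it.
    docker network create --driver=bridge --subnet=192.168.58.0/24 \
        --gateway=192.168.58.1 existing-network
    minikube start -p net-demo --network=existing-network
    # minikube only auto-removes networks it labeled, so clean up manually.
    minikube delete -p net-demo
    docker network rm existing-network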

TestKicCustomSubnet (30.53s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-900495 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-900495 --subnet=192.168.60.0/24: (28.302792251s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-900495 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-900495" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-900495
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-900495: (2.199667718s)
--- PASS: TestKicCustomSubnet (30.53s)
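Note: the subnet assertion is a one-line Docker inspect; a sketch with an illustrative profile name:

    # Start on a fixed subnet, then read back what Docker actually allocated.
    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output: 192.168.60.0/24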

TestKicStaticIP (30.24s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-217805 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-217805 --static-ip=192.168.200.200: (27.855103699s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-217805 ip
helpers_test.go:176: Cleaning up "static-ip-217805" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-217805
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-217805: (2.212421287s)
--- PASS: TestKicStaticIP (30.24s)
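Note: the static-IP check reduces to two commands; profile name illustrative:

    # Pin the node IP at start, then confirm minikube reports it back.
    minikube start -p ip-demo --static-ip=192.168.200.200
    minikube -p ip-demo ip   # expect: 192.168.200.200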

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.32s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-179222 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-179222 --driver=docker  --container-runtime=docker: (29.266012048s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-181725 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-181725 --driver=docker  --container-runtime=docker: (32.103107459s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-179222
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-181725
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-181725" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-181725
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-181725: (2.247568834s)
helpers_test.go:176: Cleaning up "first-179222" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-179222
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-179222: (2.305340804s)
--- PASS: TestMinikubeProfile (67.32s)

TestMountStart/serial/StartWithMountFirst (10.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-002545 --memory=3072 --mount-string /tmp/TestMountStartserial1618469227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-002545 --memory=3072 --mount-string /tmp/TestMountStartserial1618469227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.617103783s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.62s)
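Note: the mount flags are the interesting part of this start line: --mount-string joins host and guest paths, and the uid/gid/msize/port flags tune the 9p share (each concurrent profile needs its own --mount-port; the second mount below uses 46465). A hand-run equivalent with an illustrative host path:

    # Start a no-Kubernetes profile whose only job is serving a host mount.
    minikube start -p mount-demo --memory=3072 \
        --mount-string /tmp/data:/minikube-host \
        --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
        --no-kubernetes
    minikube -p mount-demo ssh -- ls /minikube-host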

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-002545 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-004875 --memory=3072 --mount-string /tmp/TestMountStartserial1618469227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-004875 --memory=3072 --mount-string /tmp/TestMountStartserial1618469227/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.919782171s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.92s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-004875 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.54s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-002545 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-002545 --alsologtostderr -v=5: (1.540939868s)
--- PASS: TestMountStart/serial/DeleteFirst (1.54s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-004875 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-004875
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-004875: (1.284636562s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (10.21s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-004875
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-004875: (9.206079612s)
--- PASS: TestMountStart/serial/RestartStopped (10.21s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-004875 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (82.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-271762 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1229 07:13:46.070166  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-271762 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m21.715882198s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.26s)

TestMultiNode/serial/DeployApp2Nodes (5.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-271762 -- rollout status deployment/busybox: (3.941972444s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-5l7v2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-h8d99 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-5l7v2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-h8d99 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-5l7v2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-h8d99 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.88s)
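
Note: DeployApp2Nodes fans the same nslookup checks out over every busybox replica, so DNS resolution is exercised from both nodes. A minimal sketch of that loop, assuming the busybox deployment is the only workload in the default namespace:

    # Run the resolution checks from each pod of the deployment
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.io
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done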

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-5l7v2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-5l7v2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-h8d99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-271762 -- exec busybox-769dd8b7dd-h8d99 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
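
Note: the host IP is scraped out of busybox's nslookup output; the `awk 'NR==5'` is tied to that implementation printing the answer on line 5, so the extraction is fragile by design. A sketch, with `$pod` standing in for one of the busybox pod names:

    # Resolve the host's address from inside the pod, then ping it once
    host_ip=$(kubectl exec "$pod" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec "$pod" -- sh -c "ping -c 1 $host_ip"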

                                                
                                    
TestMultiNode/serial/AddNode (34.55s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-271762 -v=5 --alsologtostderr
E1229 07:15:02.586201  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-271762 -v=5 --alsologtostderr: (33.846403111s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (34.55s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-271762 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)
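
Note: the jsonpath template dumps every node's full label map on one line, which suits a single assertion but reads poorly; interactively, `kubectl get nodes --show-labels` shows the same data in tabular form. The test's template, reflowed:

    kubectl --context multinode-271762 get nodes \
      -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"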

                                                
                                    
TestMultiNode/serial/ProfileList (0.74s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.71s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp testdata/cp-test.txt multinode-271762:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2362776621/001/cp-test_multinode-271762.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762:/home/docker/cp-test.txt multinode-271762-m02:/home/docker/cp-test_multinode-271762_multinode-271762-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m02 "sudo cat /home/docker/cp-test_multinode-271762_multinode-271762-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762:/home/docker/cp-test.txt multinode-271762-m03:/home/docker/cp-test_multinode-271762_multinode-271762-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m03 "sudo cat /home/docker/cp-test_multinode-271762_multinode-271762-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp testdata/cp-test.txt multinode-271762-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2362776621/001/cp-test_multinode-271762-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762-m02:/home/docker/cp-test.txt multinode-271762:/home/docker/cp-test_multinode-271762-m02_multinode-271762.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762 "sudo cat /home/docker/cp-test_multinode-271762-m02_multinode-271762.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762-m02:/home/docker/cp-test.txt multinode-271762-m03:/home/docker/cp-test_multinode-271762-m02_multinode-271762-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m03 "sudo cat /home/docker/cp-test_multinode-271762-m02_multinode-271762-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp testdata/cp-test.txt multinode-271762-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2362776621/001/cp-test_multinode-271762-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762-m03:/home/docker/cp-test.txt multinode-271762:/home/docker/cp-test_multinode-271762-m03_multinode-271762.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762 "sudo cat /home/docker/cp-test_multinode-271762-m03_multinode-271762.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 cp multinode-271762-m03:/home/docker/cp-test.txt multinode-271762-m02:/home/docker/cp-test_multinode-271762-m03_multinode-271762-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 ssh -n multinode-271762-m02 "sudo cat /home/docker/cp-test_multinode-271762-m03_multinode-271762-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.71s)
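
Note: CopyFile is an all-pairs matrix: each file pushed with `minikube cp` is read back over `minikube ssh -n <node>` on both ends of the transfer. One pair of that matrix, sketched with placeholder profile and node names:

    # Push a local file to one node, mirror it node-to-node, verify both copies
    minikube -p demo cp testdata/cp-test.txt demo-m02:/home/docker/cp-test.txt
    minikube -p demo cp demo-m02:/home/docker/cp-test.txt demo-m03:/home/docker/cp-test_m02_m03.txt
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"
    minikube -p demo ssh -n demo-m03 "sudo cat /home/docker/cp-test_m02_m03.txt"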

                                                
                                    
TestMultiNode/serial/StopNode (2.57s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-271762 node stop m03: (1.384583076s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-271762 status: exit status 7 (641.007691ms)

                                                
                                                
-- stdout --
	multinode-271762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-271762-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-271762-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr: exit status 7 (540.558891ms)

                                                
                                                
-- stdout --
	multinode-271762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-271762-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-271762-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:15:45.832981  893674 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:15:45.833163  893674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:45.833193  893674 out.go:374] Setting ErrFile to fd 2...
	I1229 07:15:45.833215  893674 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:15:45.833580  893674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:15:45.833861  893674 out.go:368] Setting JSON to false
	I1229 07:15:45.833924  893674 mustload.go:66] Loading cluster: multinode-271762
	I1229 07:15:45.835180  893674 notify.go:221] Checking for updates...
	I1229 07:15:45.835510  893674 config.go:182] Loaded profile config "multinode-271762": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:15:45.835544  893674 status.go:174] checking status of multinode-271762 ...
	I1229 07:15:45.836888  893674 cli_runner.go:164] Run: docker container inspect multinode-271762 --format={{.State.Status}}
	I1229 07:15:45.856747  893674 status.go:371] multinode-271762 host status = "Running" (err=<nil>)
	I1229 07:15:45.856769  893674 host.go:66] Checking if "multinode-271762" exists ...
	I1229 07:15:45.857086  893674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-271762
	I1229 07:15:45.878557  893674 host.go:66] Checking if "multinode-271762" exists ...
	I1229 07:15:45.878923  893674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:15:45.878978  893674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-271762
	I1229 07:15:45.902450  893674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33672 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/multinode-271762/id_rsa Username:docker}
	I1229 07:15:46.007181  893674 ssh_runner.go:195] Run: systemctl --version
	I1229 07:15:46.014047  893674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:46.028435  893674 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1229 07:15:46.086607  893674 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-29 07:15:46.076803385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1229 07:15:46.087255  893674 kubeconfig.go:125] found "multinode-271762" server: "https://192.168.67.2:8443"
	I1229 07:15:46.087299  893674 api_server.go:166] Checking apiserver status ...
	I1229 07:15:46.087350  893674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1229 07:15:46.101060  893674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2139/cgroup
	I1229 07:15:46.109539  893674 api_server.go:192] apiserver freezer: "11:freezer:/docker/b86d25f890bd820170112a7609009e880c7a57634c2dba43dc06261489f2b5d3/kubepods/burstable/podcce8a7b5272d6fe08fe2b88d3157b8d7/f7e0b3dadac05be9ab8baf1874e395f476e527f328069ee2bce89276601576a1"
	I1229 07:15:46.109609  893674 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b86d25f890bd820170112a7609009e880c7a57634c2dba43dc06261489f2b5d3/kubepods/burstable/podcce8a7b5272d6fe08fe2b88d3157b8d7/f7e0b3dadac05be9ab8baf1874e395f476e527f328069ee2bce89276601576a1/freezer.state
	I1229 07:15:46.117157  893674 api_server.go:214] freezer state: "THAWED"
	I1229 07:15:46.117193  893674 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1229 07:15:46.125599  893674 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1229 07:15:46.125635  893674 status.go:463] multinode-271762 apiserver status = Running (err=<nil>)
	I1229 07:15:46.125648  893674 status.go:176] multinode-271762 status: &{Name:multinode-271762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:15:46.125700  893674 status.go:174] checking status of multinode-271762-m02 ...
	I1229 07:15:46.126052  893674 cli_runner.go:164] Run: docker container inspect multinode-271762-m02 --format={{.State.Status}}
	I1229 07:15:46.152733  893674 status.go:371] multinode-271762-m02 host status = "Running" (err=<nil>)
	I1229 07:15:46.152766  893674 host.go:66] Checking if "multinode-271762-m02" exists ...
	I1229 07:15:46.153073  893674 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-271762-m02
	I1229 07:15:46.170117  893674 host.go:66] Checking if "multinode-271762-m02" exists ...
	I1229 07:15:46.170423  893674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1229 07:15:46.170467  893674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-271762-m02
	I1229 07:15:46.187829  893674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/22353-723215/.minikube/machines/multinode-271762-m02/id_rsa Username:docker}
	I1229 07:15:46.294434  893674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1229 07:15:46.306977  893674 status.go:176] multinode-271762-m02 status: &{Name:multinode-271762-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:15:46.307052  893674 status.go:174] checking status of multinode-271762-m03 ...
	I1229 07:15:46.307388  893674 cli_runner.go:164] Run: docker container inspect multinode-271762-m03 --format={{.State.Status}}
	I1229 07:15:46.325174  893674 status.go:371] multinode-271762-m03 host status = "Stopped" (err=<nil>)
	I1229 07:15:46.325211  893674 status.go:384] host is not running, skipping remaining checks
	I1229 07:15:46.325218  893674 status.go:176] multinode-271762-m03 status: &{Name:multinode-271762-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.57s)
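
Note: `minikube status` intentionally exits non-zero (exit status 7, as seen above) when any host in the profile is stopped, so callers must distinguish that from a real failure. A minimal sketch, with `demo` as a placeholder profile:

    minikube -p demo node stop m03
    minikube -p demo status        # prints per-node state
    rc=$?
    # exit code 7 means "one or more hosts stopped", not an internal error
    [ "$rc" -eq 7 ] && echo "node down, as expected"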

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.28s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-271762 node start m03 -v=5 --alsologtostderr: (8.43527094s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.28s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.53s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-271762
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-271762
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-271762: (23.222132404s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-271762 --wait=true -v=5 --alsologtostderr
E1229 07:16:25.629993  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-271762 --wait=true -v=5 --alsologtostderr: (51.168973081s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-271762
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.53s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.76s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-271762 node delete m03: (5.079725627s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)
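
Note: the go-template prints one Ready-condition status per remaining node, so a clean post-delete cluster yields exactly two True lines. The same template with the shell quoting flattened:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'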

                                                
                                    
TestMultiNode/serial/StopMultiNode (22.06s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-271762 stop: (21.867619365s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-271762 status: exit status 7 (101.803614ms)

                                                
                                                
-- stdout --
	multinode-271762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-271762-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr: exit status 7 (88.229973ms)

                                                
                                                
-- stdout --
	multinode-271762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-271762-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1229 07:17:37.907979  907432 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:17:37.908075  907432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:37.908084  907432 out.go:374] Setting ErrFile to fd 2...
	I1229 07:17:37.908090  907432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:17:37.908631  907432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:17:37.908826  907432 out.go:368] Setting JSON to false
	I1229 07:17:37.908858  907432 mustload.go:66] Loading cluster: multinode-271762
	I1229 07:17:37.908917  907432 notify.go:221] Checking for updates...
	I1229 07:17:37.909263  907432 config.go:182] Loaded profile config "multinode-271762": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:17:37.909282  907432 status.go:174] checking status of multinode-271762 ...
	I1229 07:17:37.910126  907432 cli_runner.go:164] Run: docker container inspect multinode-271762 --format={{.State.Status}}
	I1229 07:17:37.926507  907432 status.go:371] multinode-271762 host status = "Stopped" (err=<nil>)
	I1229 07:17:37.926530  907432 status.go:384] host is not running, skipping remaining checks
	I1229 07:17:37.926537  907432 status.go:176] multinode-271762 status: &{Name:multinode-271762 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1229 07:17:37.926566  907432 status.go:174] checking status of multinode-271762-m02 ...
	I1229 07:17:37.926874  907432 cli_runner.go:164] Run: docker container inspect multinode-271762-m02 --format={{.State.Status}}
	I1229 07:17:37.950212  907432 status.go:371] multinode-271762-m02 host status = "Stopped" (err=<nil>)
	I1229 07:17:37.950236  907432 status.go:384] host is not running, skipping remaining checks
	I1229 07:17:37.950243  907432 status.go:176] multinode-271762-m02 status: &{Name:multinode-271762-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.06s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.26s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-271762 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-271762 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (54.567291266s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-271762 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.26s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (31.92s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-271762
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-271762-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-271762-m02 --driver=docker  --container-runtime=docker: exit status 14 (90.248467ms)

                                                
                                                
-- stdout --
	* [multinode-271762-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-271762-m02' is duplicated with machine name 'multinode-271762-m02' in profile 'multinode-271762'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-271762-m03 --driver=docker  --container-runtime=docker
E1229 07:18:46.069612  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-271762-m03 --driver=docker  --container-runtime=docker: (29.256666356s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-271762
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-271762: exit status 80 (341.937892ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-271762 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-271762-m03 already exists in multinode-271762-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-271762-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-271762-m03: (2.170982855s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.92s)

                                                
                                    
TestScheduledStopUnix (100.79s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-159645 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-159645 --memory=3072 --driver=docker  --container-runtime=docker: (27.567074164s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-159645 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:19:36.899729  921306 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:19:36.899871  921306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:19:36.899895  921306 out.go:374] Setting ErrFile to fd 2...
	I1229 07:19:36.899915  921306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:19:36.900298  921306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:19:36.900639  921306 out.go:368] Setting JSON to false
	I1229 07:19:36.900789  921306 mustload.go:66] Loading cluster: scheduled-stop-159645
	I1229 07:19:36.902454  921306 config.go:182] Loaded profile config "scheduled-stop-159645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:19:36.902586  921306 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/scheduled-stop-159645/config.json ...
	I1229 07:19:36.902832  921306 mustload.go:66] Loading cluster: scheduled-stop-159645
	I1229 07:19:36.902990  921306 config.go:182] Loaded profile config "scheduled-stop-159645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-159645 -n scheduled-stop-159645
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-159645 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:19:37.352398  921409 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:19:37.352540  921409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:19:37.352552  921409 out.go:374] Setting ErrFile to fd 2...
	I1229 07:19:37.352578  921409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:19:37.352974  921409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:19:37.353332  921409 out.go:368] Setting JSON to false
	I1229 07:19:37.353535  921409 daemonize_unix.go:73] killing process 921329 as it is an old scheduled stop
	I1229 07:19:37.353613  921409 mustload.go:66] Loading cluster: scheduled-stop-159645
	I1229 07:19:37.357954  921409 config.go:182] Loaded profile config "scheduled-stop-159645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:19:37.358095  921409 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/scheduled-stop-159645/config.json ...
	I1229 07:19:37.358340  921409 mustload.go:66] Loading cluster: scheduled-stop-159645
	I1229 07:19:37.358521  921409 config.go:182] Loaded profile config "scheduled-stop-159645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1229 07:19:37.365452  725078 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/scheduled-stop-159645/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-159645 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-159645 -n scheduled-stop-159645
E1229 07:20:02.579997  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-159645
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-159645 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1229 07:20:03.298098  922135 out.go:360] Setting OutFile to fd 1 ...
	I1229 07:20:03.298222  922135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:20:03.298232  922135 out.go:374] Setting ErrFile to fd 2...
	I1229 07:20:03.298244  922135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1229 07:20:03.298740  922135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22353-723215/.minikube/bin
	I1229 07:20:03.299091  922135 out.go:368] Setting JSON to false
	I1229 07:20:03.299191  922135 mustload.go:66] Loading cluster: scheduled-stop-159645
	I1229 07:20:03.300431  922135 config.go:182] Loaded profile config "scheduled-stop-159645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
	I1229 07:20:03.300573  922135 profile.go:143] Saving config to /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/scheduled-stop-159645/config.json ...
	I1229 07:20:03.300825  922135 mustload.go:66] Loading cluster: scheduled-stop-159645
	I1229 07:20:03.301032  922135 config.go:182] Loaded profile config "scheduled-stop-159645": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-159645
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-159645: exit status 7 (62.444239ms)

                                                
                                                
-- stdout --
	scheduled-stop-159645
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-159645 -n scheduled-stop-159645
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-159645 -n scheduled-stop-159645: exit status 7 (63.696135ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-159645" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-159645
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-159645: (1.626288548s)
--- PASS: TestScheduledStopUnix (100.79s)
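
Note: the run above walks the whole scheduled-stop lifecycle: arm a timer, re-arm (which kills the older scheduler process, hence the "process already finished" signal check), cancel, then let a 15s timer actually fire. The user-facing commands, sketched with a placeholder profile:

    minikube stop -p demo --schedule 5m        # arm a stop 5 minutes out
    minikube stop -p demo --schedule 15s       # re-arm; the previous timer process is killed
    minikube stop -p demo --cancel-scheduled   # cancel all pending stops
    minikube status -p demo --format='{{.Host}}'   # "Stopped" (exit 7) once a timer fires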

                                                
                                    
TestSkaffold (134.32s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1563878360 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-706153 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-706153 --memory=3072 --driver=docker  --container-runtime=docker: (28.692621812s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1563878360 run --minikube-profile skaffold-706153 --kube-context skaffold-706153 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1563878360 run --minikube-profile skaffold-706153 --kube-context skaffold-706153 --status-check=true --port-forward=false --interactive=false: (1m29.933117226s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:353: "leeroy-app-7cd6f784d7-q5wpw" [bbd66281-0295-40dd-8bbc-522e94dd33fe] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003539228s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:353: "leeroy-web-56c69b9d88-gf6fp" [5d541d68-c909-4e84-bb37-a0fa0246f26a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003191154s
helpers_test.go:176: Cleaning up "skaffold-706153" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-706153
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-706153: (3.071325401s)
--- PASS: TestSkaffold (134.32s)
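
Note: aside from the harness-specific temp binary path, the skaffold invocation is runnable as-is against any profile (placeholder `demo` here) with a skaffold.yaml project checked out:

    skaffold run --minikube-profile demo --kube-context demo \
      --status-check=true --port-forward=false --interactive=false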

                                                
                                    
TestInsufficientStorage (12.59s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-879025 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-879025 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.226549115s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bffe6e44-d2dc-413a-9f4f-bbe7285225fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-879025] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"18521d41-ad50-4c38-8c5c-b1a3be25b766","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22353"}}
	{"specversion":"1.0","id":"e4a2b9f8-4b35-4bee-a676-86ac5fb6b3bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b7cc4668-0fad-4a09-9fbf-9d11094fc623","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig"}}
	{"specversion":"1.0","id":"661b286e-c378-4b47-a9cb-c1be03022e2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube"}}
	{"specversion":"1.0","id":"0e372ebb-c6a8-423e-895f-e56001a95632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"19b98511-a50d-402a-b628-e8835f5bd45b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c1600f42-58db-4b4a-8be1-b3c5ac8a321a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"aad25736-5c7f-430c-9c79-47dc15c6a2e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"afb1691a-e793-4730-910b-30a2fc0fe2cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e353bade-4146-4372-8d96-fb074ba116e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"22ac752c-3711-458a-a8f8-02efc80dde33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-879025\" primary control-plane node in \"insufficient-storage-879025\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fd618cd-154b-4963-9729-dcecb19e933a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1766979815-22353 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"148bb7f3-7542-4bfd-bea9-07d6dd48579d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4f66bb2-63c5-4025-a6df-b274e7da3027","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-879025 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-879025 --output=json --layout=cluster: exit status 7 (307.779819ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-879025","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-879025","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:23:14.906098  932677 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-879025" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-879025 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-879025 --output=json --layout=cluster: exit status 7 (318.845013ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-879025","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-879025","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1229 07:23:15.226178  932743 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-879025" does not appear in /home/jenkins/minikube-integration/22353-723215/kubeconfig
	E1229 07:23:15.236300  932743 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/insufficient-storage-879025/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-879025" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-879025
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-879025: (1.737671629s)
--- PASS: TestInsufficientStorage (12.59s)
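
Note: with --output=json, minikube start emits one CloudEvents JSON object per line, which makes failures like the RSRC_DOCKER_STORAGE event above (exit code 26) machine-readable. A sketch of extracting just the error message, assuming jq is available:

    minikube start -p demo --output=json --driver=docker --container-runtime=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'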

                                                
                                    
TestRunningBinaryUpgrade (89.25s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.241311281 start -p running-upgrade-350135 --memory=3072 --vm-driver=docker  --container-runtime=docker
E1229 07:40:02.580000  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.241311281 start -p running-upgrade-350135 --memory=3072 --vm-driver=docker  --container-runtime=docker: (40.095345742s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-350135 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-350135 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.43143047s)
helpers_test.go:176: Cleaning up "running-upgrade-350135" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-350135
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-350135: (2.109933223s)
--- PASS: TestRunningBinaryUpgrade (89.25s)

                                                
                                    
TestKubernetesUpgrade (361.68s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1229 07:38:46.070585  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.986509706s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-848210 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-848210 --alsologtostderr: (11.156165469s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-848210 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-848210 status --format={{.Host}}: exit status 7 (159.054324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m36.49929601s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-848210 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (133.373618ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-848210] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-848210
	    minikube start -p kubernetes-upgrade-848210 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8482102 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-848210 --kubernetes-version=v1.35.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1229 07:44:13.330977  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.106828835s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-848210" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-848210
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-848210: (2.5341833s)
--- PASS: TestKubernetesUpgrade (361.68s)
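
The flow above can be replayed by hand. A minimal sketch, assuming the same profile name and a minikube binary on PATH (the exit codes noted are the ones observed in this run):

	$ minikube start -p kubernetes-upgrade-848210 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=docker
	$ minikube stop -p kubernetes-upgrade-848210
	$ minikube -p kubernetes-upgrade-848210 status --format={{.Host}}    # prints "Stopped", exit status 7 (expected after stop)
	$ minikube start -p kubernetes-upgrade-848210 --kubernetes-version=v1.35.0    # in-place upgrade
	$ minikube start -p kubernetes-upgrade-848210 --kubernetes-version=v1.28.0    # downgrade attempt, exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)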

TestMissingContainerUpgrade (95.38s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2908728366 start -p missing-upgrade-020039 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2908728366 start -p missing-upgrade-020039 --memory=3072 --driver=docker  --container-runtime=docker: (35.178884497s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-020039
E1229 07:37:50.283493  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-020039: (10.480693031s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-020039
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-020039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-020039 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.575797434s)
helpers_test.go:176: Cleaning up "missing-upgrade-020039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-020039
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-020039: (2.170677958s)
--- PASS: TestMissingContainerUpgrade (95.38s)
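
What this exercises: the cluster is created with an older release binary, its node container is then stopped and removed behind minikube's back, and the binary under test must recreate it on the next start. A minimal sketch, assuming a v1.35.0 release binary at the temp path used above:

	$ /tmp/minikube-v1.35.0.2908728366 start -p missing-upgrade-020039 --memory=3072 --driver=docker --container-runtime=docker
	$ docker stop missing-upgrade-020039 && docker rm missing-upgrade-020039    # remove the node container out from under minikube
	$ out/minikube-linux-arm64 start -p missing-upgrade-020039 --memory=3072 --driver=docker --container-runtime=docker    # newer binary must recover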

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-198702 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-198702 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (103.215383ms)

-- stdout --
	* [NoKubernetes-198702] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22353
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22353-723215/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22353-723215/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
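
The exit status 14 here is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, the fix suggested by the error text applies; a minimal sketch:

	$ minikube config unset kubernetes-version    # clear any globally pinned version
	$ minikube start -p NoKubernetes-198702 --no-kubernetes --driver=docker --container-runtime=docker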

TestNoKubernetes/serial/StartWithK8s (37.41s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-198702 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1229 07:23:46.070383  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-198702 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.639480415s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-198702 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.41s)

TestNoKubernetes/serial/StartWithStopK8s (12.99s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-198702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-198702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (10.856389841s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-198702 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-198702 status -o json: exit status 2 (342.311203ms)

-- stdout --
	{"Name":"NoKubernetes-198702","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-198702
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-198702: (1.786040925s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (12.99s)
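
Note that "minikube status -o json" exits non-zero (2 in this run) when a component is stopped, even though the JSON itself is well formed, so check the exit code before treating the output as healthy. A sketch for pulling the fields, assuming jq is available:

	$ minikube -p NoKubernetes-198702 status -o json | jq -r '.Host, .Kubelet'    # jq assumed installed
	Running
	Stopped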

TestNoKubernetes/serial/Start (10.16s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-198702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-198702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (10.164687904s)
--- PASS: TestNoKubernetes/serial/Start (10.16s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22353-723215/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-198702 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-198702 "sudo systemctl is-active --quiet service kubelet": exit status 1 (354.412462ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
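
The exit status 1 above is the point of the check: systemctl is-active exits 0 only for an active unit and non-zero (status 3 inside the guest here) when it is inactive, so the failing command confirms kubelet is not running. A minimal sketch:

	$ minikube ssh -p NoKubernetes-198702 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?    # non-zero is the desired result with --no-kubernetes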

TestNoKubernetes/serial/ProfileList (0.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

TestNoKubernetes/serial/Stop (2.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-198702
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-198702: (2.238236521s)
--- PASS: TestNoKubernetes/serial/Stop (2.24s)

TestNoKubernetes/serial/StartNoArgs (8.59s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-198702 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-198702 --driver=docker  --container-runtime=docker: (8.593184018s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.59s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-198702 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-198702 "sudo systemctl is-active --quiet service kubelet": exit status 1 (366.708924ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/Setup (0.84s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

TestStoppedBinaryUpgrade/Upgrade (339.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.715177400 start -p stopped-upgrade-908098 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.715177400 start -p stopped-upgrade-908098 --memory=3072 --vm-driver=docker  --container-runtime=docker: (57.483446442s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.715177400 -p stopped-upgrade-908098 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.715177400 -p stopped-upgrade-908098 stop: (10.878808506s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-908098 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1229 07:35:02.580340  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-908098 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m31.509949023s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (339.87s)
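
The shape of this test: create and stop the cluster with an older release binary (note the legacy --vm-driver spelling), then start it with the binary under test. A minimal sketch, assuming a v1.35.0 release binary at the temp path used above:

	$ /tmp/minikube-v1.35.0.715177400 start -p stopped-upgrade-908098 --memory=3072 --vm-driver=docker --container-runtime=docker
	$ /tmp/minikube-v1.35.0.715177400 -p stopped-upgrade-908098 stop
	$ out/minikube-linux-arm64 start -p stopped-upgrade-908098 --memory=3072 --driver=docker --container-runtime=docker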

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-908098
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestPreload/Start-NoPreload-PullImage (80.88s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-670412 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-670412 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m13.835567292s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-670412 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-670412
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-670412: (6.145966079s)
--- PASS: TestPreload/Start-NoPreload-PullImage (80.88s)

TestPreload/Restart-With-Preload-Check-User-Image (49.58s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-670412 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1229 07:42:50.283092  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-670412 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (49.339561539s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-670412 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (49.58s)
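
Taken together, the two preload steps check that an image pulled into a --preload=false cluster survives a restart with preloading re-enabled. A minimal sketch of the same sequence:

	$ minikube start -p test-preload-670412 --memory=3072 --preload=false --driver=docker --container-runtime=docker
	$ minikube -p test-preload-670412 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
	$ minikube stop -p test-preload-670412
	$ minikube start -p test-preload-670412 --preload=true --driver=docker --container-runtime=docker
	$ minikube -p test-preload-670412 image list    # the pulled busybox image should still be listed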

TestPause/serial/Start (69.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-401608 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1229 07:43:29.124380  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:43:46.070427  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-401608 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m9.969261595s)
--- PASS: TestPause/serial/Start (69.97s)

TestPause/serial/SecondStartNoReconfiguration (43.08s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-401608 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-401608 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (43.041035946s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.08s)

TestNetworkPlugins/group/auto/Start (75.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E1229 07:45:02.580204  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m15.774349527s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.78s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-401608 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-401608 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-401608 --output=json --layout=cluster: exit status 2 (389.516621ms)

-- stdout --
	{"Name":"pause-401608","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-401608","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
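
The cluster-layout status JSON uses HTTP-flavored codes, as seen above: 200/OK for running components, 405/Stopped, 418/Paused, and in this run the command exits 2 for the paused cluster, which is why the non-zero exit is accepted. A sketch for extracting the interesting fields, assuming jq is available:

	$ minikube status -p pause-401608 --output=json --layout=cluster | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'    # jq assumed installed
	Paused
	Stopped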

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-401608 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (1.05s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-401608 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-401608 --alsologtostderr -v=5: (1.046584772s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

TestPause/serial/DeletePaused (2.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-401608 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-401608 --alsologtostderr -v=5: (2.652753125s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

TestPause/serial/VerifyDeletedResources (0.77s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-401608
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-401608: exit status 1 (19.575541ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-401608: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.77s)
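
After the delete, no container, volume, or network should remain for the profile; the volume-inspect error above is the desired outcome. A minimal sketch of the same checks:

	$ docker ps -a --filter name=pause-401608    # expect no matching containers
	$ docker volume inspect pause-401608         # expect "no such volume"
	$ docker network ls | grep pause-401608      # expect no output (grep exits 1)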

TestNetworkPlugins/group/kindnet/Start (51.54s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (51.539488399s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.54s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-728759 "pgrep -a kubelet"
I1229 07:45:57.187881  725078 config.go:182] Loaded profile config "auto-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7z6hj" [267e8960-3275-44c7-911f-51164652f63e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7z6hj" [267e8960-3275-44c7-911f-51164652f63e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003986128s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.31s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
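
The three probes in this group cover in-cluster DNS, plain localhost dialing, and hairpin traffic: the last one has the netcat pod dial its own service name, which only succeeds when the CNI handles hairpin NAT. A minimal sketch against the same deployment:

	$ kubectl --context auto-728759 exec deployment/netcat -- nslookup kubernetes.default
	$ kubectl --context auto-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	$ kubectl --context auto-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"    # pod reaching itself through its service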

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-4nxsx" [f44faea0-78ff-441d-b687-3aebf0c4a19e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004084147s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-728759 "pgrep -a kubelet"
I1229 07:46:16.532547  725078 config.go:182] Loaded profile config "kindnet-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-ff7st" [c5c0942b-603f-49c2-83d0-b7119a98062c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-ff7st" [c5c0942b-603f-49c2-83d0-b7119a98062c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004483047s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

TestNetworkPlugins/group/kindnet/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.33s)

TestNetworkPlugins/group/calico/Start (69.05s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.045419528s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.05s)

TestNetworkPlugins/group/custom-flannel/Start (55.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (55.539444755s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.54s)
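
As the runs in this group show, --cni accepts either a built-in plugin name (kindnet, calico, flannel, bridge, or false to disable CNI) or a path to a custom manifest, which is how this custom-flannel profile is started. A minimal sketch of the two forms:

	$ minikube start -p calico-728759 --cni=calico --driver=docker --container-runtime=docker
	$ minikube start -p custom-flannel-728759 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker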

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-6ccfg" [66c67125-7652-4f91-8d6b-7b31a6354fa6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-6ccfg" [66c67125-7652-4f91-8d6b-7b31a6354fa6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.01091782s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-728759 "pgrep -a kubelet"
I1229 07:47:45.691182  725078 config.go:182] Loaded profile config "calico-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zbtft" [7d2f894a-9204-4765-a3da-3c4ff2b38ea1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zbtft" [7d2f894a-9204-4765-a3da-3c4ff2b38ea1] Running
E1229 07:47:50.283551  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00385636s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-728759 "pgrep -a kubelet"
I1229 07:47:49.268801  725078 config.go:182] Loaded profile config "custom-flannel-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-tjkzw" [1972d0c9-1b93-49e3-80b4-1d17ab56e210] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-tjkzw" [1972d0c9-1b93-49e3-80b4-1d17ab56e210] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005479864s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.35s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/false/Start (75.75s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m15.747893165s)
--- PASS: TestNetworkPlugins/group/false/Start (75.75s)

TestNetworkPlugins/group/enable-default-cni/Start (71.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1229 07:48:46.069977  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m11.019263644s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.02s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-728759 "pgrep -a kubelet"
I1229 07:49:38.741697  725078 config.go:182] Loaded profile config "enable-default-cni-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-6nrbc" [d9dca26d-7f14-456e-a167-7b9a2e02f8d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-6nrbc" [d9dca26d-7f14-456e-a167-7b9a2e02f8d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00419041s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

TestNetworkPlugins/group/false/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-728759 "pgrep -a kubelet"
I1229 07:49:39.819103  725078 config.go:182] Loaded profile config "false-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.38s)

TestNetworkPlugins/group/false/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-skbjw" [815fd5a1-d695-4234-ac7c-cd1ddd959aff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-skbjw" [815fd5a1-d695-4234-ac7c-cd1ddd959aff] Running
E1229 07:49:45.630534  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003379075s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (53.36s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (53.363473261s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.36s)

TestNetworkPlugins/group/bridge/Start (49.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E1229 07:50:57.453658  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:57.458916  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:57.469241  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:57.489459  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:57.529731  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:57.609986  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:57.770342  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:58.090860  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:50:58.731683  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:00.016222  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:02.577259  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (49.10106328s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-728759 "pgrep -a kubelet"
I1229 07:51:06.603548  725078 config.go:182] Loaded profile config "bridge-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-5lsp8" [802a9106-fdf9-4186-aba2-b7b68e97884e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 07:51:07.698395  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-5lsp8" [802a9106-fdf9-4186-aba2-b7b68e97884e] Running
E1229 07:51:11.389392  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:12.669854  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003728964s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-vj2hd" [7a3e92ed-e5c3-424d-b374-44bd7ab7bf12] Running
E1229 07:51:10.110199  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.115467  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.125781  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.146466  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.186723  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.267019  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.427795  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:51:10.748343  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003755802s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-728759 "pgrep -a kubelet"
E1229 07:51:15.231351  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1229 07:51:15.423896  725078 config.go:182] Loaded profile config "flannel-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-728759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-7z4c8" [892b495b-4195-46bb-b56b-f1d30ff5b25b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-7z4c8" [892b495b-4195-46bb-b56b-f1d30ff5b25b] Running
E1229 07:51:20.352389  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004170194s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/kubenet/Start (72.39s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-728759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m12.388505939s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (72.39s)

TestStartStop/group/old-k8s-version/serial/FirstStart (90.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-084048 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1229 07:52:19.381327  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:32.032921  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.242585  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.247767  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.258027  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.278291  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.318549  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.399219  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.559566  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:39.880481  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:40.520699  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:41.801374  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:44.362306  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.482548  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.590861  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.596210  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.606643  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.626949  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.667270  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.747644  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:49.908002  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:50.228791  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:50.283068  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:50.869417  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-084048 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m30.856979631s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (90.86s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-728759 "pgrep -a kubelet"
I1229 07:52:51.939605  725078 config.go:182] Loaded profile config "kubenet-728759": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-728759 replace --force -f testdata/netcat-deployment.yaml
E1229 07:52:52.150048  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-28p75" [a477848b-8d91-41ac-b9fe-f261708a76ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1229 07:52:54.711233  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-28p75" [a477848b-8d91-41ac-b9fe-f261708a76ad] Running
E1229 07:52:59.723061  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:52:59.832245  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004092039s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.28s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-728759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-728759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E1229 07:58:22.377434  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:22.382703  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:22.393111  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:22.413414  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:22.453781  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:22.534120  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:22.694615  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/DeployApp (11.50s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-084048 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [eb15407b-9ad9-4842-9d5b-5274ab5a5fb0] Pending
helpers_test.go:353: "busybox" [eb15407b-9ad9-4842-9d5b-5274ab5a5fb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [eb15407b-9ad9-4842-9d5b-5274ab5a5fb0] Running
E1229 07:53:30.553324  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003452491s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-084048 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.50s)

TestStartStop/group/no-preload/serial/FirstStart (77.12s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-518821 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-518821 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m17.119572832s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-084048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-084048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.739215171s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-084048 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.03s)

TestStartStop/group/old-k8s-version/serial/Stop (11.62s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-084048 --alsologtostderr -v=3
E1229 07:53:41.302057  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:53:46.070245  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-084048 --alsologtostderr -v=3: (11.616565226s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-084048 -n old-k8s-version-084048
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-084048 -n old-k8s-version-084048: exit status 7 (110.99277ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-084048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (62.51s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-084048 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1229 07:53:53.953354  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:01.164647  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:11.513568  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:38.990731  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:38.996085  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:39.006610  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:39.027080  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:39.067442  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:39.147711  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:39.308060  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:39.628766  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.099962  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.105342  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.115789  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.136447  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.176788  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.257236  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.269555  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.417740  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:40.738770  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-084048 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m2.113385426s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-084048 -n old-k8s-version-084048
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (62.51s)

TestStartStop/group/no-preload/serial/DeployApp (10.34s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-518821 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e01f2522-2178-41cc-becc-0e567f019efb] Pending
E1229 07:54:41.379268  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:41.550417  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [e01f2522-2178-41cc-becc-0e567f019efb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1229 07:54:42.660413  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:44.110911  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [e01f2522-2178-41cc-becc-0e567f019efb] Running
E1229 07:54:45.221257  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:54:49.232183  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005109709s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-518821 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hhplf" [ce561fb6-bfe9-4297-8b71-8fe29b02aa40] Running
E1229 07:54:50.342470  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003448998s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-518821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-518821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016465397s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-518821 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (11.51s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-518821 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-518821 --alsologtostderr -v=3: (11.508656042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.51s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-hhplf" [ce561fb6-bfe9-4297-8b71-8fe29b02aa40] Running
E1229 07:54:59.472344  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:55:00.583462  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004462055s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-084048 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-084048 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.83s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-084048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-084048 -n old-k8s-version-084048
E1229 07:55:02.580784  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/functional-175099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-084048 -n old-k8s-version-084048: exit status 2 (333.261712ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-084048 -n old-k8s-version-084048
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-084048 -n old-k8s-version-084048: exit status 2 (334.317405ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-084048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-084048 -n old-k8s-version-084048
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-084048 -n old-k8s-version-084048
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.83s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.46s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-518821 -n no-preload-518821
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-518821 -n no-preload-518821: exit status 7 (270.089821ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-518821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.46s)

TestStartStop/group/no-preload/serial/SecondStart (54.09s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-518821 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-518821 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (53.704097029s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-518821 -n no-preload-518821
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.09s)

TestStartStop/group/embed-certs/serial/FirstStart (75.13s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-365783 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1229 07:55:19.953198  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:55:21.064590  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:55:23.085028  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:55:33.433936  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:55:57.453160  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-365783 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (1m15.124939576s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (75.13s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jt45w" [c8f81217-cd20-4f55-a10f-ce83179f6aa7] Running
E1229 07:56:00.913700  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:02.025436  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003749908s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-jt45w" [c8f81217-cd20-4f55-a10f-ce83179f6aa7] Running
E1229 07:56:06.866638  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:06.871998  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:06.882312  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:06.902650  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:06.942918  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:07.023281  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:07.183619  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:07.503968  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:08.144361  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.105914  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.111186  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.121464  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.141830  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.182106  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.262497  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003800966s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-518821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1229 07:56:09.422854  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:09.425111  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-518821 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.08s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-518821 --alsologtostderr -v=1
E1229 07:56:09.743885  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:10.110152  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:10.384639  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-518821 -n no-preload-518821
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-518821 -n no-preload-518821: exit status 2 (349.136983ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-518821 -n no-preload-518821
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-518821 -n no-preload-518821: exit status 2 (350.282872ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-518821 --alsologtostderr -v=1
E1229 07:56:11.665834  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-518821 -n no-preload-518821
E1229 07:56:11.985391  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-518821 -n no-preload-518821
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)
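
The Pause block above doubles as a recipe for checking pause state by exit code: while components are paused, minikube status exits 2, and the per-component format strings isolate the API server ("Paused") and the kubelet ("Stopped"). A minimal sketch of the same sequence, reusing the no-preload-518821 profile from this run (commands copied from the log above):

    # Pause the cluster, probe each component's state, then unpause.
    out/minikube-linux-arm64 pause -p no-preload-518821 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-518821 -n no-preload-518821
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-518821 -n no-preload-518821
    out/minikube-linux-arm64 unpause -p no-preload-518821 --alsologtostderr -v=1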

TestStartStop/group/newest-cni/serial/FirstStart (36.34s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-023636 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1229 07:56:17.106088  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:19.346333  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-023636 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (36.336749605s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.34s)
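
The flags in this FirstStart are what put the cluster in CNI mode without deploying a CNI plugin (hence the later "requires additional setup" warnings): --network-plugin=cni selects CNI networking and --extra-config passes the pod network CIDR through to kubeadm. A minimal sketch of the same invocation, with newest-cni-demo as a hypothetical scratch profile name:

    # Start a CNI-mode cluster with a custom pod CIDR handed to kubeadm.
    out/minikube-linux-arm64 start -p newest-cni-demo --memory=3072 \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0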

TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-365783 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [bd98e01c-35b7-4df7-a5dd-d3a5a0fc1251] Pending
helpers_test.go:353: "busybox" [bd98e01c-35b7-4df7-a5dd-d3a5a0fc1251] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1229 07:56:25.143259  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/auto-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:27.346679  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [bd98e01c-35b7-4df7-a5dd-d3a5a0fc1251] Running
E1229 07:56:29.586543  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003531976s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-365783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-365783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-365783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.315739081s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-365783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/embed-certs/serial/Stop (11.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-365783 --alsologtostderr -v=3
E1229 07:56:37.793755  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kindnet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-365783 --alsologtostderr -v=3: (11.663433275s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.66s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-365783 -n embed-certs-365783
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-365783 -n embed-certs-365783: exit status 7 (98.268322ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-365783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (56.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-365783 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1229 07:56:47.827279  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:56:50.067706  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-365783 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (56.205188598s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-365783 -n embed-certs-365783
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (56.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-023636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-023636 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.476351969s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

TestStartStop/group/newest-cni/serial/Stop (11.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-023636 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-023636 --alsologtostderr -v=3: (11.422293764s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.42s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-023636 -n newest-cni-023636
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-023636 -n newest-cni-023636: exit status 7 (109.340426ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-023636 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (17.57s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-023636 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-023636 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (17.204890923s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-023636 -n newest-cni-023636
E1229 07:57:22.833992  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.57s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-023636 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-023636 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-023636 -n newest-cni-023636
E1229 07:57:23.946061  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-023636 -n newest-cni-023636: exit status 2 (351.079232ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-023636 -n newest-cni-023636
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-023636 -n newest-cni-023636: exit status 2 (363.993624ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-023636 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-023636 -n newest-cni-023636
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-023636 -n newest-cni-023636
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.29s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-610127 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1229 07:57:31.028149  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:39.243105  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-610127 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (44.535621036s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.54s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-nfkxg" [671e8faa-c045-4819-a1b3-cda086d3052a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004282898s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-nfkxg" [671e8faa-c045-4819-a1b3-cda086d3052a] Running
E1229 07:57:49.590363  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:50.283869  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/skaffold-706153/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.193937  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.199288  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.209537  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.229784  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.270051  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.350348  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.511095  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:52.831646  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:53.472788  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:57:54.753636  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004214088s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-365783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-365783 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-365783 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-365783 -n embed-certs-365783
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-365783 -n embed-certs-365783: exit status 2 (455.176302ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-365783 -n embed-certs-365783
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-365783 -n embed-certs-365783: exit status 2 (454.66023ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-365783 --alsologtostderr -v=1
E1229 07:57:57.314714  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-365783 -n embed-certs-365783
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-365783 -n embed-certs-365783
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.83s)

TestPreload/PreloadSrc/gcs (4.4s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-350601 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-350601 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (4.184872603s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-350601" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-350601
E1229 07:58:06.925589  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/calico-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPreload/PreloadSrc/gcs (4.40s)

TestPreload/PreloadSrc/github (13.97s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-008572 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
E1229 07:58:12.675941  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-008572 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=docker: (13.792922651s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-008572" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-008572
--- PASS: TestPreload/PreloadSrc/github (13.97s)
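
The gcs and github subtests above run the same download-only start and differ only in the mirror named by --preload-source; the gcs-cached variant further below finishes in under a second because the tarball is already on disk. A minimal sketch, with preload-demo as a hypothetical profile name (the real tests generate random suffixes):

    # Download the preload tarball from GCS, then the same flow via GitHub releases.
    out/minikube-linux-arm64 start -p preload-demo --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --driver=docker --container-runtime=docker
    out/minikube-linux-arm64 start -p preload-demo --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --driver=docker --container-runtime=docker
    # Clean up the throwaway profile.
    out/minikube-linux-arm64 delete -p preload-demo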

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-610127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7a32c0d5-254c-4d85-b161-b6eee882dca1] Pending
helpers_test.go:353: "busybox" [7a32c0d5-254c-4d85-b161-b6eee882dca1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1229 07:58:17.274138  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/custom-flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [7a32c0d5-254c-4d85-b161-b6eee882dca1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003095977s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-610127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

TestPreload/PreloadSrc/gcs-cached (0.46s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-316237 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-316237" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-316237
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-610127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1229 07:58:23.015465  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:23.655624  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-610127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-610127 --alsologtostderr -v=3
E1229 07:58:24.936805  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:27.497575  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:32.617987  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:33.156266  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-610127 --alsologtostderr -v=3: (10.950484342s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127: exit status 7 (65.139142ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-610127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
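
As in the earlier groups, this check leans on minikube's status exit codes: a stopped host makes status exit 7 (logged above as "may be ok"), and addons can still be flagged for enablement while the cluster is down, taking effect when the profile is started again. A minimal sketch of the same probe, reusing the default-k8s-diff-port-610127 profile from this run:

    # Exit code 7 from status indicates the host is stopped.
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
    # Enabling the dashboard addon is still accepted while stopped.
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-610127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4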

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-610127 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0
E1229 07:58:42.858867  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:46.070268  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/addons-762064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:50.707738  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/bridge-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:58:52.948297  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/flannel-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:03.339268  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/old-k8s-version-084048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:14.117322  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/kubenet-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-610127 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0: (52.937987293s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.30s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-kfshz" [a030d943-87e9-4e46-94dc-616124cb9faa] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003088311s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-kfshz" [a030d943-87e9-4e46-94dc-616124cb9faa] Running
E1229 07:59:38.990093  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/enable-default-cni-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:40.099639  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/false-728759/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003716874s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-610127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-610127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-610127 --alsologtostderr -v=1
E1229 07:59:41.008023  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:41.013482  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:41.023747  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:41.044008  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:41.084270  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:41.164583  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1229 07:59:41.324819  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
E1229 07:59:41.644957  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127: exit status 2 (326.180261ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127: exit status 2 (327.772406ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-610127 --alsologtostderr -v=1
E1229 07:59:42.285354  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-610127 -n default-k8s-diff-port-610127
E1229 07:59:43.565738  725078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22353-723215/.minikube/profiles/no-preload-518821/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

Test skip (26/352)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-183483 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-183483" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-183483
--- SKIP: TestDownloadOnlyKic (0.42s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.77s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-728759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-728759

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-728759

>>> host: /etc/nsswitch.conf:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/hosts:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/resolv.conf:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-728759

>>> host: crictl pods:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: crictl containers:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> k8s: describe netcat deployment:
error: context "cilium-728759" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-728759" does not exist

>>> k8s: netcat logs:
error: context "cilium-728759" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-728759" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-728759" does not exist

>>> k8s: coredns logs:
error: context "cilium-728759" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-728759" does not exist

>>> k8s: api server logs:
error: context "cilium-728759" does not exist

>>> host: /etc/cni:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: ip a s:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: ip r s:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: iptables-save:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: iptables table nat:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-728759

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-728759

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-728759" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-728759" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-728759

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-728759

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-728759" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-728759" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-728759" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-728759" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-728759" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: kubelet daemon config:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> k8s: kubelet logs:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-728759

>>> host: docker daemon status:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: docker daemon config:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: docker system info:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: cri-docker daemon status:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: cri-docker daemon config:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: cri-dockerd version:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: containerd daemon status:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: containerd daemon config:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: containerd config dump:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: crio daemon status:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: crio daemon config:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: /etc/crio:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

>>> host: crio config:
* Profile "cilium-728759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-728759"

----------------------- debugLogs end: cilium-728759 [took: 4.576020518s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-728759" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-728759
--- SKIP: TestNetworkPlugins/group/cilium (4.77s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-481224" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-481224
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
